69 results for Odd third order intensity parameters


Relevance: 40.00%

Abstract:

Glenwood Homes Pty Ltd v Everhard [2008] QSC 192 involved the not uncommon situation where one costs order is made against several parties represented by a single firm of solicitors. Dutney J considered the implications when only some of the parties liable for the payment of the costs file a notice of objection to the costs statement served in respect of those costs.

Relevance: 30.00%

Abstract:

The effectiveness of higher-order spectral (HOS) phase features in speaker recognition is investigated by comparison with Mel-cepstral features on the same speech data. Unlike Mel-frequency cepstral coefficients (MFCCs), HOS phase features retain phase information from the Fourier spectrum. Gaussian mixture models are constructed from Mel-cepstral features and HOS features, respectively, for the same data from various speakers in the Switchboard telephone speech corpus. Feature clusters, model parameters and classification performance are analyzed. HOS phase features on their own provide a correct identification rate of about 97% on the chosen subset of the corpus, the same level of accuracy as provided by MFCCs. Cluster plots and model parameters are compared to show that HOS phase features can provide complementary information to better discriminate between speakers.
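As a hedged sketch of the classification pipeline described above (not the authors' implementation), the Python snippet below fits one Gaussian mixture model per speaker and identifies an unknown utterance by the highest average log-likelihood. The feature matrices are random stand-ins for MFCC or HOS-phase frames, and all names, dimensions and mixture settings are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_speaker_models(features_by_speaker, n_components=8, seed=0):
    """Fit one GMM per speaker; rows of each feature matrix are frames,
    columns are coefficients (MFCC or HOS-phase features alike)."""
    models = {}
    for speaker, feats in features_by_speaker.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag", random_state=seed)
        models[speaker] = gmm.fit(feats)
    return models

def identify(models, utterance_feats):
    """Score an unknown utterance against every speaker model and return
    the speaker with the highest average per-frame log-likelihood."""
    scores = {spk: gmm.score(utterance_feats) for spk, gmm in models.items()}
    return max(scores, key=scores.get)

# Hypothetical usage with random 13-dimensional stand-in features.
rng = np.random.default_rng(0)
data = {f"spk{i}": rng.normal(i, 1.0, size=(500, 13)) for i in range(3)}
models = train_speaker_models(data)
print(identify(models, rng.normal(1, 1.0, size=(200, 13))))  # expected: spk1
```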

Relevance: 30.00%

Abstract:

With the advent of Service Oriented Architecture, Web services have gained tremendous popularity. Owing to the availability of a large number of Web services, finding an appropriate Web service that meets the requirements of the user is a challenge. This warrants the need to establish an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods that improve the accuracy of Web service discovery in matching the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user's interest. Considering the semantic relationships of the words used in describing the services, together with their input and output parameters, can lead to accurate Web service discovery, and appropriate linking of individually matched services should then fully satisfy the user's requirements. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery, through a novel three-phase Web service discovery methodology.

The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis on the content of the Web Service Description Language document, a support-based latent semantic kernel is constructed, using an innovative concept of binning and merging, from a large quantity of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to find the hidden meaning of query terms that could not otherwise be found.

Sometimes a single Web service is unable to fully satisfy the requirement of the user. In such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase. Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In this link analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path at the minimum cost of traversal, as sketched below. The third phase, system integration, integrates the results from the preceding two phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, an integral part of the system integration phase, makes the final recommendations, including individual and composite Web services, to the user.

In order to evaluate the performance of the proposed method, extensive experimentation has been performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with those of a standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery. The proposed method outperforms both the information-retrieval and machine-learning based methods. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 of the Web services found in phase-I for linking. Empirical results also ascertain that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from both the semantic analysis (phase-I) and the link analysis (phase-II) in a systematic fashion. Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
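As a hedged illustration of the link-analysis phase, the sketch below models Web services as nodes of a graph and applies the Floyd-Warshall all-pairs shortest-path algorithm to recover a minimum-cost composition chain. Floyd-Warshall is one standard way to realise an all-pairs search; the thesis does not specify which algorithm it uses, and the graph, edge costs and service indices here are invented.

```python
import math

def floyd_warshall(n, edges):
    """All-pairs shortest paths over n services; edges maps (i, j) to
    the assumed cost of chaining service i's output into service j."""
    dist = [[0.0 if i == j else math.inf for j in range(n)] for i in range(n)]
    nxt = [[None] * n for _ in range(n)]
    for (i, j), c in edges.items():
        dist[i][j], nxt[i][j] = c, j
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    return dist, nxt

def composition_chain(nxt, i, j):
    """Recover the cheapest composition chain from service i to j."""
    if nxt[i][j] is None:
        return None
    chain = [i]
    while i != j:
        i = nxt[i][j]
        chain.append(i)
    return chain

# Hypothetical 4-service graph with invented chaining costs.
dist, nxt = floyd_warshall(4, {(0, 1): 1.0, (1, 2): 2.0,
                               (0, 2): 4.0, (2, 3): 1.0})
print(dist[0][3], composition_chain(nxt, 0, 3))  # 4.0 [0, 1, 2, 3]
```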

Relevance: 30.00%

Abstract:

Although internet chat is a significant aspect of many internet users' lives, the manner in which participants in quasi-synchronous chat situations orient to issues of social and moral order remains to be studied in depth. The research presented here is therefore at the forefront of a continually developing area of study. This work contributes new insights into how members construct and make accountable the social and moral orders of an adult-oriented Internet Relay Chat (IRC) channel by addressing three questions: (1) What conversational resources do participants use in addressing matters of social and moral order? (2) How are these conversational resources deployed within IRC interaction? and (3) What interactional work is locally accomplished through use of these resources?

A survey of the literature reveals considerable research in the field of computer-mediated communication, exploring both asynchronous and quasi-synchronous discussion forums. The research discussed represents a range of communication interests including group and collaborative interaction, the linguistic construction of social identity, and the linguistic features of online interaction. It is suggested that the present research differs from previous studies in three ways: (1) it focuses on the interaction itself, rather than the ways in which the medium affects the interaction; (2) it offers turn-by-turn analysis of interaction in situ; and (3) it discusses membership categories only insofar as they are shown to be relevant by participants through their talk. Through consideration of the literature, the present study is firmly situated within the broader computer-mediated communication field.

Ethnomethodology, conversation analysis and membership categorization analysis were adopted as appropriate methodological approaches to explore the research focus on interaction in situ, and in particular to investigate the ways in which participants negotiate and co-construct social and moral orders in the course of their interaction. IRC logs collected from one chat room were analysed using a two-pass method, based on a modification of the approaches proposed by Pomerantz and Fehr (1997) and ten Have (1999). From this detailed examination of the data corpus, three interactional topics are identified by means of which participants clearly orient to issues of social and moral order: challenges to rule violations, 'trolling' for cybersex, and experiences regarding the 9/11 attacks. Instances of these interactional topics are subjected to fine-grained analysis to demonstrate the ways in which participants draw upon various interactional resources in their negotiation and construction of channel social and moral orders. While these analytical topics stand alone in individual focus, together they illustrate different instances in which participants' talk serves to negotiate social and moral orders or collaboratively construct new orders.

Building on the work of Vallis (2001), Chapter 5 illustrates three ways that rule violation is initiated as a channel discussion topic: (1) through a visible violation in open channel, (2) through an official warning or sanction by a channel operator regarding the violation, and (3) through a complaint or announcement of a rule violation by a non-channel-operator participant. Once the topic has been initiated, it is shown to become available as a topic for others, including the perceived violator. The fine-grained analysis of challenges to rule violations ultimately demonstrates that channel participants orient to the rules as a resource in developing categorizations of both the rule violation and the violator. These categorizations are contextual in that they are locally based and understood within specific contexts and practices. Thus, it is shown that compliance with rules, and an orientation to rule violations as inappropriate within the social and moral orders of the channel, serves two purposes: (1) to orient the speaker as a group member, and (2) to reinforce the social and moral orders of the group.

Chapter 6 explores a particular type of rule violation: solicitations for 'cybersex', known in IRC parlance as 'trolling'. In responding to trolling violations, participants are demonstrated to use affiliative and aggressive humour, in particular irony, sarcasm and insults. These conversational resources perform solidarity building within the group, positioning non-troll respondents as compliant group members. This solidarity work is shown to have three outcomes: (1) consensus building, (2) collaborative construction of group membership, and (3) the continued construction and negotiation of existing social and moral orders.

Chapter 7, the final data analysis chapter, offers insight into how participants, in discussing the events of 9/11 on the day itself, collaboratively constructed new social and moral orders while orienting to issues of appropriate and reasonable emotional responses. This analysis demonstrates how participants go about 'doing being ordinary' (Sacks, 1992b) in formulating their 'first thoughts' (Jefferson, 2004). Through sharing their initial impressions of the event, participants perform support work within the interaction, in essence working to normalise both the event and their initial misinterpretation of it. Normalising as a support-work mechanism is also shown in relation to participants constructing the 'quiet' following the event as unusual. Normalising is accomplished by reference to the indexical 'it' and location formulations, which participants use both to negotiate who can claim to experience the 'unnatural quiet' and to identify the extent of the quiet. Through their talk, participants upgrade the quiet from something legitimately experienced by one person in a particular place to something that could be experienced 'anywhere', moving the phenomenon from local to global provenance.

With its methodological design and detailed analysis and findings, this research contributes to existing knowledge in four ways. First, it shows how rules are used by participants as a resource in negotiating and constructing social and moral orders. Second, it demonstrates that irony, sarcasm and insults are three devices of humour which can be used to perform solidarity work and reinforce existing social and moral orders. Third, it demonstrates how new social and moral orders are collaboratively constructed in relation to extraordinary events, which serve to frame the event and evoke reasonable responses from participants. And last, the detailed analysis and findings further support the use of conversation analysis and membership categorization analysis as valuable methods for approaching quasi-synchronous computer-mediated communication.

Relevance: 30.00%

Abstract:

This study investigates the everyday practices of young children acting in their social worlds within the context of the school playground. It employs an ethnographic, ethnomethodological approach using conversation analysis. In the context of the child participation rights advanced by the United Nations Convention on the Rights of the Child (UNCRC) and childhood studies, the study considers children's social worlds and their participation agendas. The participants of the study were a group of young children in a preparatory year setting in a Queensland school. These children, aged 4 to 6 years, were video-recorded as they participated in their day-to-day activities in the classroom and in the playground. Data collection took place over a period of three months, with a total of 26 hours of video data. Episodes of the video-recordings were shown to small groups of children and to the teacher to stimulate conversations about what they saw on the video, and these conversations were audio-recorded. This method acknowledged the child's standpoint and positioned children as active participants in accounting for their relationships with others. These accounts are discussed as interactionally built comments on past joint experiences, and they provided a starting place for analysis of the video-recorded interaction.

Four data chapters are presented in this thesis, each investigating a different topic of interaction: how children use "telling" as a tactical tool in the management of interactional trouble; how children use their "ideas" as possessables to gain ownership of a game, and the interactional matters that follow; how children account for interactional matters and bid for ownership of "whose idea" the game was; and how a small group of girls oriented to a particular code of conduct when accounting for their actions in a pretend game of "school".

Four key themes emerged from the analysis. The first theme addresses two arenas of action operating in the social world of children, pretend and real: the "pretend", as a player in a pretend game, and the "real", as a classroom member. These two arenas are intertwined. Through inferences to explicit and implicit "codes of conduct", moral obligations are invoked as children attempt to socially exclude one another, build alliances and enforce their own social positions. The second theme is the notion of shared history. This theme addresses the history that the children reconstructed, which acts as a thread weaving through their interactions, with implications for present and future relationships. The third theme concerns ownership. In a shared context such as the playground, ownership is a highly contested issue. Children draw on resources such as rules, their ideas as possessables, and codes of behaviour as devices to construct particular social and moral orders around ownership of the game. These themes have consequences for children's participation in a social group. The fourth theme, methodological in nature, shows how the researcher was viewed as an outsider and novice and was used as a resource by the children; this theme is used to inform adult-child relationships.

The study was situated within an interest in participation rights for children and perspectives of children as competent beings. Asking children to account for their participation in playground activities situates children as analysers of their own social worlds and offers adults further information for understanding how children themselves construct their social interactions. While reporting on the experiences of one group of children, this study opens up theoretical questions about children's social orders and the influence of these orders on their everyday practices. This thesis uncovers how children both participate in, and shape, their everyday social worlds through talk and interaction. It investigates the consequences that the taken-for-granted activities of "playing the game" have for children's social participation in the wider culture of the classroom. Consideration of this significance may assist adults to better understand and appreciate the social worlds of young children in the school playground.

Relevance: 30.00%

Abstract:

Research on analogies in science education has focussed on student interpretation of teacher and textbook analogies, psychological aspects of learning with analogies and structured approaches for teaching with analogies. Few studies have investigated how analogies might be pivotal in students’ growing participation in chemical discourse. To study analogies in this way requires a sociocultural perspective on learning that focuses on ways in which language, signs, symbols and practices mediate participation in chemical discourse. This study reports research findings from a teacher-research study of two analogy-writing activities in a chemistry class. The study began with a theoretical model, Third Space, which informed analyses and interpretation of data. Third Space was operationalized into two sub-constructs called Dialogical Interactions and Hybrid Discourses. The aims of this study were to investigate sociocultural aspects of learning chemistry with analogies in order to identify classroom activities where students generate Dialogical Interactions and Hybrid Discourses, and to refine the operationalization of Third Space. These aims were addressed through three research questions. The research questions were studied through an instrumental case study design. The study was conducted in my Year 11 chemistry class at City State High School for the duration of one Semester. Data were generated through a range of data collection methods and analysed through discourse analysis using the Dialogical Interactions and Hybrid Discourse sub-constructs as coding categories. Results indicated that student interactions differed between analogical activities and mathematical problem-solving activities. Specifically, students drew on discourses other than school chemical discourse to construct analogies and their growing participation in chemical discourse was tracked using the Third Space model as an interpretive lens. Results of this study led to modification of the theoretical model adopted at the beginning of the study to a new model called Merged Discourse. Merged Discourse represents the mutual relationship that formed during analogical activities between the Analog Discourse and the Target Discourse. This model can be used for interpreting and analysing classroom discourse centred on analogical activities from sociocultural perspectives. That is, it can be used to code classroom discourse to reveal students’ growing participation with chemical (or scientific) discourse consistent with sociocultural perspectives on learning.

Relevance: 30.00%

Abstract:

Financial processes may possess long memory, and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges.

The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA), which can systematically eliminate trends of different orders. This method is based on the identification of the scaling of the q-th-order moments and is a generalisation of standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider rescaled-range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of five states of Australia are also found to possess long memory, and for these electricity price series heavy tails are also pronounced in their probability densities.

The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA.

The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets, and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.

The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia, and comparison with the results obtained from R/S analysis, the periodogram method and MF-DFA is provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second order, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
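As a hedged illustration of the detection technique named above, the sketch below implements a bare-bones MF-DFA fluctuation function following the standard published recipe (profile, segmentation, polynomial detrending, q-th-order averaging of segment variances). It is not the thesis code, and the scales and test series are invented.

```python
import numpy as np

def mfdfa_fluctuation(x, scales, q=2.0, poly_order=1):
    """q-th order detrended fluctuation function F_q(s). The slope of
    log F_q versus log s estimates h(q); h(2) near 0.5 suggests no
    memory, while h(2) > 0.5 is read as a sign of long memory."""
    profile = np.cumsum(x - np.mean(x))               # integrated series
    F = []
    for s in scales:
        n_seg = len(profile) // s
        var = []
        for v in range(n_seg):
            seg = profile[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, poly_order), t)
            var.append(np.mean((seg - trend) ** 2))   # detrended variance
        var = np.asarray(var)
        F.append(np.mean(var ** (q / 2.0)) ** (1.0 / q))
    return np.asarray(F)

# Hypothetical check on white noise: the fitted exponent should be ~0.5.
rng = np.random.default_rng(0)
x = rng.normal(size=8192)
scales = np.array([16, 32, 64, 128, 256, 512])
Fq = mfdfa_fluctuation(x, scales)
print(round(np.polyfit(np.log(scales), np.log(Fq), 1)[0], 2))  # ~0.5
```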

Relevance: 30.00%

Abstract:

An algorithm based on the concept of Kalman filtering is proposed in this paper for the estimation of power system signal attributes such as amplitude, frequency and phase angle. This technique can be used in protection relays, digital AVRs, DSTATCOMs, FACTS devices and other power electronics applications. Furthermore, the algorithm is particularly suitable for the integration of distributed generation sources into power grids, where fast and accurate detection of small variations in signal attributes is needed. Practical considerations such as the effect of noise, higher-order harmonics, and computational issues of the algorithm are considered and tested in the paper. Several computer simulations are presented to highlight the usefulness of the proposed approach. Simulation results show that the proposed technique can simultaneously estimate the signal attributes, even when the signal is highly distorted due to the presence of non-linear loads and noise.
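The paper's own estimator is not reproduced here; as a rough, assumption-laden illustration of the idea, the sketch below runs a textbook linear Kalman filter on the in-phase/quadrature state of a sinusoid at an assumed nominal frequency (50 Hz), from which amplitude and phase estimates follow. The sampling rate, noise covariances and test signal are invented, and tracking frequency deviations, as the paper does, would require extending the state.

```python
import numpy as np

def kf_amplitude_phase(y, f0=50.0, fs=2000.0, q=1e-6, r=1e-2):
    """Linear Kalman filter on the in-phase/quadrature state of a
    sinusoid at an assumed nominal frequency f0; amplitude and phase
    are read off the filtered state at every sample."""
    w = 2 * np.pi * f0 / fs
    F = np.array([[np.cos(w), -np.sin(w)],
                  [np.sin(w),  np.cos(w)]])   # state rotates by w per sample
    H = np.array([[1.0, 0.0]])                # only the in-phase part is observed
    x, P = np.zeros(2), np.eye(2)
    Q, R = q * np.eye(2), r
    amp, ph = [], []
    for yk in y:
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                   # innovation variance (1x1)
        K = (P @ H.T) / S                     # Kalman gain (2x1)
        x = x + (K * (yk - H @ x)).ravel()    # update
        P = (np.eye(2) - K @ H) @ P
        amp.append(np.hypot(x[0], x[1]))
        ph.append(np.arctan2(x[1], x[0]))
    return np.array(amp), np.array(ph)

# Hypothetical test: noisy 50 Hz signal of amplitude 1.5 sampled at 2 kHz.
t = np.arange(0, 0.2, 1 / 2000.0)
noise = 0.05 * np.random.default_rng(0).normal(size=t.size)
amp, ph = kf_amplitude_phase(1.5 * np.cos(2 * np.pi * 50 * t + 0.3) + noise)
print(round(float(amp[-1]), 2))  # ~1.5
```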

Relevance: 30.00%

Abstract:

In this thesis we are interested in financial risk, and the instrument we want to use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist, and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a larger probability of large profits. Percentiles describe tail behaviour.

The estimation of VaR is a complex task. It is important to know the main characteristics of financial data in order to choose the best model. The existing literature is very wide, sometimes controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness: is skewness constant, or is there significant variability which in turn can affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach for modelling time-varying volatility (conditional variance) and skewness simultaneously.

The new tools are modifications of the Generalised Lambda Distributions (GLDs). These are four-parameter distributions which allow the first four moments to be modelled nearly independently; in particular we are interested in what we will call para-moments, i.e., mean, variance, skewness and kurtosis. The GLDs are used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs. Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We used local linear regression to estimate the conditional mean and conditional variance semi-parametrically. The method is not efficient enough to capture all the dependence structure in the three indices (ASX 200, S&P 500 and FT 30); however, it provides an idea of the data generating process underlying the series and helps in choosing a good technique to model the data.

We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see for example Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Pentti, 2007) modelled time-varying skewness while implicitly assuming the existence of the third moment. The GLDs also suggest that the mean, variance, skewness and, in general, the conditional distribution vary over time, as already suggested by the existing literature. The GLDs give good results in estimating VaR on three real indices, ASX 200, S&P 500 and FT 30, with results very similar to those provided by historical simulation.
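For context, the sketch below implements the historical-simulation benchmark that the GLD results are compared against: the rolling-window empirical percentile of past returns, sign-flipped to report a positive loss figure. The window length, confidence level and simulated returns are illustrative assumptions, not the thesis settings.

```python
import numpy as np

def hist_sim_var(returns, window=250, alpha=0.01):
    """Rolling historical-simulation VaR: for each day, the alpha
    percentile of the previous `window` returns, sign-flipped so the
    reported VaR is a positive loss figure."""
    returns = np.asarray(returns)
    var = np.full(returns.shape, np.nan)
    for t in range(window, len(returns)):
        var[t] = -np.percentile(returns[t - window:t], 100 * alpha)
    return var

# Hypothetical daily returns: the 99% one-day VaR of an N(0, 1%) series
# should hover around 2.33% of portfolio value.
rng = np.random.default_rng(0)
v = hist_sim_var(rng.normal(0.0, 0.01, size=1000))
print(round(float(np.nanmean(v)) * 100, 2))  # ~2.3
```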

Relevance: 30.00%

Abstract:

PURPOSE: To determine the effect of acute bouts of moderate- and high-intensity walking exercise on non-exercise activity thermogenesis (NEAT) in overweight and obese adults.

METHOD: 16 participants performed a single bout of either moderate-intensity walking exercise (MIE) or high-intensity walking exercise (HIE) on two separate occasions. The MIE consisted of walking for 60 minutes on a motorized treadmill at 6 km·h⁻¹. The 60-minute HIE session consisted of walking in 5-min intervals at 6 km·h⁻¹ and 10% grade, each followed by 5 min at 0% grade. NEAT was assessed by accelerometer on the three days before, the day of, and the three days following the exercise sessions.

RESULTS: There was no significant difference in NEAT vector magnitude (counts·min⁻¹) between the pre-exercise period (days 1-3) and the exercise day (day 4) for either the MIE or the HIE protocol. In addition, there was no change in NEAT during the first two days following the MIE session; however, NEAT increased by 16% on day 7 (post-exercise) compared with the exercise day (P = 0.32). During the post-exercise period following the HIE session, NEAT increased by 25% on day 7 compared with the exercise day (P = 0.08), and by 30-33% compared with the pre-exercise period (day 1, day 2 and day 3; P = 0.03, 0.03 and 0.02, respectively).

CONCLUSION: A single bout of either MIE or HIE did not alter NEAT on the exercise day or on the first two days following the exercise session. However, monitoring NEAT on a third day allowed the detection of a 48-h delay in increased NEAT after performing HIE. A longer-term intervention is needed to determine the effect of exercise sessions accumulated over a week on NEAT.

Relevance: 30.00%

Abstract:

The traditional searching method for model-order selection in linear regression is a nested, full-parameter-set searching procedure over the desired orders, which we call full-model order selection. On the other hand, a method for model selection searches for the best sub-model within each order. In this paper, we propose using the model-selection searching method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed searching method gives better accuracies than the traditional one, especially for low signal-to-noise ratios, over a wide range of model-order selection criteria (both information-theoretic and bootstrap-based). We also show that for some models the performance of the bootstrap-based criterion improves significantly with the proposed partial-model selection searching method.

Index Terms— Model order estimation, model selection, information theoretic criteria, bootstrap

1. INTRODUCTION

Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information-theoretic procedures include Akaike's information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say, a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model. This makes comparison between model-order selection algorithms difficult, as within the same model with a given order one can find an example for which one of the methods performs favourably or fails [6, 8]. Our aim is to improve the performance of the model-order selection criteria in cases where the SNR is low by considering a model-selection searching procedure that takes into account not only the full-model order search but also a partial-model order search within the given model order. Understandably, the improvement in the performance of the model-order estimation comes at the expense of additional computational complexity.
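A hedged sketch of the contrast drawn above: for each candidate order p, the traditional nested search fits only the first p regressors, while the partial-model search lets every p-subset compete. AIC stands in for the selection criterion; the regressor family, noise level and data are invented for illustration.

```python
import numpy as np
from itertools import combinations

def aic(y, X):
    """AIC for a linear least-squares fit: n*log(RSS/n) + 2k."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, k = X.shape
    return n * np.log(rss / n) + 2 * k

def order_select(y, regressors, max_order, partial=True):
    """Pick a model by AIC. partial=False is the traditional nested
    search (the first p regressors at order p); partial=True lets the
    best p-subset within each order compete as well."""
    best_score, best_idx = np.inf, None
    for p in range(1, max_order + 1):
        cands = combinations(range(len(regressors)), p) if partial \
            else [tuple(range(p))]
        for idx in cands:
            score = aic(y, np.column_stack([regressors[i] for i in idx]))
            if score < best_score:
                best_score, best_idx = score, idx
    return best_idx

# Hypothetical polynomial example: y depends on regressors 0 and 3 only.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
regs = [t ** i for i in range(6)]
y = 2 * regs[0] + 0.5 * regs[3] + 0.05 * rng.normal(size=t.size)
print(order_select(y, regs, 4, partial=False))  # nested: likely (0, 1, 2, 3)
print(order_select(y, regs, 4, partial=True))   # subset: likely (0, 3)
```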

Relevance: 30.00%

Abstract:

In recent years, the application of heterogeneous photocatalytic water purification processes has gained wide attention due to their effectiveness in degrading and mineralizing recalcitrant organic compounds, as well as the possibility of utilizing the solar UV and visible light spectrum. This paper aims to review and summarize recently published work on the titanium dioxide (TiO2) photocatalytic oxidation of pesticides and phenolic compounds, predominant in stormwater and wastewater effluents. The effect of various operating parameters on the photocatalytic degradation of pesticides and phenols is discussed. Results reported here suggest that the photocatalytic degradation of organic compounds depends on the type and composition of the photocatalyst, light intensity, initial substrate concentration, amount of catalyst, pH of the reaction medium, ionic components in the water, solvent type, oxidizing agents/electron acceptors, catalyst application mode, and calcination temperature in the water environment. A substantial amount of research has focused on the enhancement of TiO2 photocatalysis by modification with metal, non-metal and ion doping. Recent developments in TiO2 photocatalysis for the degradation of various pesticides and phenols are also highlighted in this review. It is evident from the literature survey that photocatalysis has shown good potential for the removal of various organic pollutants. However, there is still a need to establish the practical utility of this technique at commercial scale.

Relevance: 30.00%

Abstract:

Differential axial deformation between the column elements and shear wall elements of cores increases with building height and geometric complexity. Adverse effects of this differential axial deformation reduce building performance and lifetime serviceability. Quantifying axial deformations using ambient measurements from vibrating-wire, external mechanical and electronic strain gauges, in order to acquire adequate provisions to mitigate these adverse effects, is a well-established method. However, these gauges must be installed in or on elements to acquire continuous measurements, which makes their use uneconomical and inconvenient. This motivates the development of an alternative method to quantify axial deformations. This paper proposes an innovative method based on modal parameters to quantify the axial deformations of shear wall elements in the cores of buildings. The capabilities of the method are presented through an illustrative example.

Relevance: 30.00%

Abstract:

The Upper Roper River is one of Australia's unique tropical rivers, largely untouched by development. The Upper Roper River catchment comprises the sub-catchments of the Waterhouse River and Roper Creek, the two tributaries of the Roper River. There is a complex geological setting with different aquifer types. In this seasonal system, close interaction between surface water and groundwater contributes to both streamflow and sustaining ecosystems, and the interaction is highly variable between seasons. A conceptual hydrogeological model was developed to investigate the different hydrological processes and geochemical parameters, and to determine the baseline characteristics of the water resources of this pristine catchment.

In the catchment, the long-term average rainfall is around 850 mm and is summer dominant, which significantly influences the total hydrological system. The difference between seasons is pronounced, with high rainfall of up to 600 mm/month in the wet season and negligible rainfall in the dry season. Canopy interception significantly reduces the amount of effective rainfall because of the native vegetation cover in the pristine catchment. Evaporation exceeds rainfall for the majority of the year. Due to elevated evaporation and high temperatures in the tropics, at least 600 mm of annual rainfall is required to generate potential recharge.

Analysis of the trends in 120 years of rainfall data helped define "wet" and "dry" periods: a decreasing trend corresponds to dry periods, and an increasing trend to wet periods. The period from 1900 to 1970 was considered Dry period 1, when there were years with no effective rainfall and, when there was, the intensity of rainfall was around 300 mm. The period 1970-1985 was identified as Wet period 1, when positive effective rainfall occurred in almost every year and the intensity reached up to 700 mm. The period 1985-1995 was Dry period 2, with similar characteristics to Dry period 1. Finally, the last decade was Wet period 2, with effective rainfall intensity up to 800 mm. This variability in rainfall over the decades increased or decreased recharge and discharge, improving or reducing surface water and groundwater quantity and quality in the different wet and dry periods.

The stream discharge follows the rainfall pattern. In the wet season, the aquifer is replenished, groundwater levels and groundwater discharge are high, and surface runoff is the dominant component of streamflow. The Waterhouse River contributes two thirds and Roper Creek one third of the Roper River flow. As the dry season progresses, surface runoff depletes and groundwater becomes the main component of streamflow. Flow in the Waterhouse River becomes negligible and the Roper Creek dries up, but the Roper River maintains its flow throughout the year. This is due to groundwater and spring discharge from the highly permeable Tindall Limestone and tufa aquifers.

Rainfall seasonality and the lithology of both the catchment and the aquifers are shown to influence water chemistry. In the wet season, dilution of water bodies by rainwater is the main process. In the dry season, when groundwater provides baseflow to the streams, their chemical composition reflects the lithology of the aquifers, in particular in the karstic areas. Water chemistry distinguishes four types of aquifer materials, described as alluvium, sandstone, limestone and tufa. Surface waters in the headwaters of the Waterhouse River, the Roper Creek and their tributaries are fresh, reflecting the alluvium and sandstone aquifers. At and downstream of the confluence of the Roper River, river water chemistry indicates the influence of rainfall dilution in the wet season and the signature of the Tindall Limestone and tufa aquifers in the dry season. Rainbow Spring on the Waterhouse River and Bitter Spring on the Little Roper River (known as Roper Creek at the headwaters) discharge from the Tindall Limestone; Botanic Walk Spring and Fig Tree Spring discharge into the Roper River from tufa. The source of water was defined based on the chemical composition of the springs, surface water and groundwater. The mechanisms controlling surface water chemistry were examined to determine the dominance of precipitation, evaporation or rock weathering on the water's chemical composition.

Simple water balance models for the catchment have been developed. The important aspects to be considered in water resource planning for this total system are the naturally high salinity in the region, especially in the downstream sections, and how unpredictable climate variation may impact the natural seasonal variability of water volumes and surface-subsurface interaction.
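As a loose illustration of the "simple water balance models" mentioned above (not the study's calibrated model), the toy sketch below treats canopy interception as a fixed fraction of rainfall and allows potential recharge only once effective rainfall clears an evapotranspiration threshold. The threshold of roughly 600 mm echoes the abstract's round figure, and the interception fraction is an invented assumption.

```python
def annual_water_balance(rain_mm, interception_frac=0.15, et_threshold_mm=600.0):
    """Toy annual bucket model: effective rainfall is rainfall minus an
    assumed canopy-interception fraction, and potential recharge is
    whatever effective rainfall exceeds an assumed ET threshold."""
    effective = rain_mm * (1.0 - interception_frac)
    recharge = max(0.0, effective - et_threshold_mm)
    return effective, recharge

# An average year (~850 mm) versus a dry year (~500 mm):
for rain in (850.0, 500.0):
    eff, rech = annual_water_balance(rain)
    print(f"rain={rain:.0f} mm -> effective={eff:.0f} mm, "
          f"potential recharge={rech:.0f} mm")
```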

Relevance: 30.00%

Abstract:

An approach to pattern recognition using invariant parameters based on higher-order spectra is presented. In particular, bispectral invariants are used to classify one-dimensional shapes. The bispectrum, which is translation invariant, is integrated along straight lines passing through the origin in bifrequency space. The phase of the integrated bispectrum is shown to be scale- and amplification-invariant. A minimal set of these invariants is selected as the feature vector for pattern classification. Pattern recognition using higher-order spectral invariants is fast, suited to parallel implementation, and works for signals corrupted by Gaussian noise. The classification technique is shown to distinguish two similar but different bolts given their one-dimensional profiles.
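A minimal single-record sketch of the feature described above, assuming a direct (non-averaged) bispectrum estimate: bispectrum values are summed along lines f2 = a·f1 through the origin of bifrequency space, and the phase of each integral is kept as a feature. The line slopes, signal length and sanity check are invented, and a practical implementation would average the bispectrum over segments.

```python
import numpy as np

def bispectral_phase_invariants(x, n_lines=8):
    """Phase of the bispectrum summed along straight lines f2 = a*f1
    through the origin of bifrequency space (single-record estimate)."""
    X = np.fft.fft(x)
    N = len(x)
    phases = []
    for a in np.linspace(0.05, 1.0, n_lines):          # line slopes 0 < a <= 1
        acc = 0.0 + 0.0j
        for f1 in range(1, N // 4):
            f2 = int(round(a * f1))
            if 1 <= f2 <= f1 and f1 + f2 < N // 2:
                acc += X[f1] * X[f2] * np.conj(X[f1 + f2])  # bispectrum sample
        phases.append(np.angle(acc))
    return np.array(phases)

# Sanity check: a circular shift (translation) leaves the phases unchanged,
# since the linear phase factors cancel in X(f1)X(f2)X*(f1+f2).
rng = np.random.default_rng(0)
sig = rng.normal(size=256)
p1 = bispectral_phase_invariants(sig)
p2 = bispectral_phase_invariants(np.roll(sig, 37))
print(np.allclose(np.exp(1j * p1), np.exp(1j * p2), atol=1e-6))  # True
```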