47 results for Hutchby, Ian: Conversation analysis. Principles, practices and application
at the Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
The Wigner higher order moment spectra (WHOS) are defined as extensions of the Wigner-Ville distribution (WD) to higher order moment spectra domains. A general class of time-frequency higher order moment spectra is also defined in terms of arbitrary higher order moments of the signal, as generalizations of Cohen's general class of time-frequency representations. The properties of the general class of time-frequency higher order moment spectra can be related to the properties of WHOS, which are, in fact, extensions of the properties of the WD. Discrete time and frequency Wigner higher order moment spectra (DTF-WHOS) distributions are introduced for signal processing applications and are shown to be implemented with two FFT-based algorithms. One application is presented where the Wigner bispectrum (WB), which is a WHOS in the third-order moment domain, is utilized for the detection of transient signals embedded in noise. The WB is compared with the WD in terms of simulation examples and analysis of real sonar data. It is shown that better detection schemes can be derived, at low signal-to-noise ratio, when the WB is applied.
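As a point of reference for the second-order case that the WHOS generalize, a discrete Wigner-Ville distribution can be computed slice-by-slice with an FFT over the lag axis. The sketch below is a minimal illustration of that standard construction, not the paper's DTF-WHOS algorithms; the function name and the use of the analytic signal are our own assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Minimal discrete Wigner-Ville distribution (second-order case).

    Forms the instantaneous autocorrelation r[n, m] = z[n+m] * conj(z[n-m])
    and takes an FFT over the lag index m; this is the WD that the
    higher-order moment spectra extend.
    """
    z = hilbert(np.real(x))            # analytic signal reduces cross-terms
    N = len(z)
    W = np.zeros((N, N))
    for n in range(N):
        mmax = min(n, N - 1 - n)       # largest symmetric lag available at time n
        r = np.zeros(N, dtype=complex)
        for m in range(-mmax, mmax + 1):
            r[m % N] = z[n + m] * np.conj(z[n - m])
        W[n] = np.real(np.fft.fft(r))  # FFT over the lag axis: frequency slice at time n
    return W

# Example: a linear chirp concentrates along its instantaneous frequency.
t = np.arange(256) / 256.0
x = np.cos(2 * np.pi * (20 * t + 40 * t ** 2))
W = wigner_ville(x)
```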
Abstract:
Combining headspace (HS) sampling with a needle-trap device (NTD) to determine priority volatile organic compounds (VOCs) in water samples results in improved sensitivity and efficiency compared to conventional static HS sampling. A 22-gauge, 51-mm stainless steel needle packed with Tenax TA and Carboxen 1000 particles is used as the NTD. Three different HS-NTD sampling methodologies are evaluated, and all give limits of detection for the target VOCs in the ng L−1 range. Active (purge-and-trap) HS-NTD sampling is found to give the best sensitivity but requires exhaustive control of the sampling conditions. The use of the NTD to collect the headspace gas sample results in a combined adsorption/desorption mechanism. Testing different HS thermostating temperatures reveals a greater desorption effect when the sample is allowed to diffuse, whether passively or actively, through the sorbent particles. The limits of detection obtained with the simplest sampling methodology, static HS-NTD (5 mL aqueous sample in 20 mL HS vials, thermostated at 50 °C for 30 min with agitation), are sufficiently low to permit its application to the analysis of 18 priority VOCs in natural and waste waters. In all cases the compounds were detected below regulated levels.
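For context on why static-headspace sensitivity depends on the thermostating conditions, the standard equilibrium relation (the textbook treatment of static HS, not stated in the abstract) links the headspace concentration to the original sample concentration:

```latex
% Static headspace equilibrium (standard relation, not from the paper):
% C_G  : analyte concentration in the headspace at equilibrium
% C_0  : original concentration in the aqueous sample
% K    : partition coefficient C_L / C_G (decreases with temperature)
% beta : phase ratio V_G / V_L (here roughly 15/5 = 3 for a 5 mL sample in a 20 mL vial)
\[
  C_G \;=\; \frac{C_0}{K + \beta}
\]
```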
Abstract:
Case-crossover is one of the most used designs for analyzing the health-related effects of air pollution. Nevertheless, no one has reviewed its application and methodology in this context. Objective: We conducted a systematic review of case-crossover (CCO) designs used to study the relationship between air pollution and morbidity and mortality, from the standpoint of methodology and application. Data sources and extraction: A search was made of the MEDLINE and EMBASE databases. Reports were classified as methodologic or applied. From the latter, the following information was extracted: author, study location, year, type of population (general or patients), dependent variable(s), independent variable(s), type of CCO design, and whether effect modification was analyzed for variables at the individual level. Data synthesis: The review covered 105 reports that fulfilled the inclusion criteria. Of these, 24 addressed methodological aspects, and the remainder involved the design's application. In the methodological reports, the designs that yielded the best results in simulation were symmetric bidirectional CCO and time-stratified CCO. Furthermore, we observed an increase across time in the use of certain CCO designs, mainly symmetric bidirectional and time-stratified CCO. The dependent variables most frequently analyzed were those relating to hospital morbidity; the pollutants most often studied were those linked to particulate matter. Among the CCO-application reports, 13.6% studied effect modification for variables at the individual level. Conclusions: The use of CCO designs has undergone considerable growth; the most widely used designs were those that yielded better results in simulation studies: symmetric bidirectional and time-stratified CCO. However, the advantages of CCO as a method of analysis of variables at the individual level are put to little use.
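As an illustration of how a time-stratified CCO design selects its control days (a minimal sketch of the standard scheme; the function and variable names are our own, not from the review):

```python
from datetime import date, timedelta

def time_stratified_referents(event_day: date):
    """Control days for a time-stratified case-crossover design:
    all days in the same calendar month and year as the event that
    fall on the same day of the week, excluding the event day itself.
    Matching on month and weekday controls for seasonal and weekly
    patterns in both exposure and outcome.
    """
    controls = []
    d = date(event_day.year, event_day.month, 1)
    while d.month == event_day.month:
        if d.weekday() == event_day.weekday() and d != event_day:
            controls.append(d)
        d += timedelta(days=1)
    return controls

# Example: an admission on 2007-04-18 (a Wednesday) is compared with
# pollutant levels on the other Wednesdays of April 2007.
print(time_stratified_referents(date(2007, 4, 18)))
```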
Abstract:
This work deals with coming and going verbs in three typologically distinct languages: a Romance one (Spanish), a Germanic one (German) and a Slavic one (Polish), within Fillmore's (1966, 1971, 1975, 1982, 1983) framework. On the grounds of the data description, it is shown that visible linguistic phenomena clearly related to coming and going verbs, such as deixis or Aktionsart, can be transcended by resorting to the more abstract notion that the author, following Winston (1987) and Speas and Tenny (2003), calls "viewpoint".
Abstract:
This paper presents an outline of the rationale and theory of the MuSIASEM scheme (Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism). First, three points of the rationale behind our MuSIASEM scheme are discussed: (i) endosomatic and exosomatic metabolism in relation to Georgescu-Roegen's flow-fund scheme; (ii) the bioeconomic analogy of hypercycle and dissipative parts in ecosystems; (iii) the dramatic reallocation of human time and land use patterns across the various sectors of a modern economy. Next, a flow-fund representation of the MuSIASEM scheme on three levels (the whole national level, the paid-work sectors level, and the agricultural sector level) is illustrated to look at the structure of the human economy in relation to two primary factors: (i) human time, a fund; and (ii) exosomatic energy, a flow. The three-level representation uses extensive and intensive variables simultaneously. Key conceptual tools of the MuSIASEM scheme, mosaic effects and impredicative loop analysis, are explained using the three-level flow-fund representation. Finally, we claim that the MuSIASEM scheme can be seen as a multi-purpose grammar useful for dealing with sustainability issues.
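A worked expression of the flow-fund coupling the scheme relies on (our own summary in the usual MuSIASEM notation, not reproduced from the paper): at each compartment i, the exosomatic flow is tied to the human-time fund through an intensive variable, the exosomatic metabolic rate,

```latex
% EMR_i : exosomatic metabolic rate of compartment i (intensive, e.g. MJ/h)
% ET_i  : exosomatic energy throughput of compartment i (a flow, e.g. MJ/year)
% HA_i  : human activity allocated to compartment i (a fund, e.g. h/year)
\[
  \mathrm{EMR}_i \;=\; \frac{\mathrm{ET}_i}{\mathrm{HA}_i},
  \qquad
  \mathrm{ET}_{\text{society}} \;=\; \sum_i \mathrm{HA}_i \cdot \mathrm{EMR}_i
\]
```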
Abstract:
This study presents a first attempt to extend the Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism (MuSIASEM) approach to a spatial dimension using GIS techniques in the metropolitan area of Barcelona. We use a combination of census and commercial databases, along with a detailed land cover map, to create a layer of Common Geographic Units that we populate with the local values of human time spent in different activities according to the MuSIASEM hierarchical typology. In this way, we mapped the hours of available human time against the working hours spent in different locations, highlighting the gradients in spatial density between the residential locations of workers (generating the labor supply) and the places where the working hours actually take place. We found a strong trimodal pattern of clumps of areas with different combinations of values of time spent on household activities and on paid work. We also measured and mapped the spatial segregation between these two activities and put forward the conjecture that this segregation increases with higher energy throughput, as the size of the functional units must be able to cope with the flow of exosomatic energy. Finally, we discuss the effectiveness of the approach by comparing our geographic representation of exosomatic throughput to one derived from conventional methods.
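One standard way to quantify spatial segregation between two activities (our own note; the abstract does not specify which index the authors used) is the dissimilarity index computed over the Common Geographic Units:

```latex
% D in [0, 1]: 0 = identical spatial distributions of the two activities,
% 1 = complete segregation. h_i and w_i are the hours of household activity
% and paid work located in geographic unit i; H and W are their totals.
\[
  D \;=\; \frac{1}{2} \sum_{i} \left| \frac{h_i}{H} - \frac{w_i}{W} \right|
\]
```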
Abstract:
When dealing with sustainability, we are concerned with the biophysical as well as the monetary aspects of economic and ecological interactions. This multidimensional approach requires that special attention be given to dimensional issues in relation to curve-fitting practice in economics. Unfortunately, many empirical and theoretical studies in economics, as well as in ecological economics, apply dimensional numbers in exponential or logarithmic functions. We show that it is an analytical error to put a dimensional quantity x into exponential functions (a^x) and logarithmic functions (log_a x). Second, we investigate the conditions of data sets under which a particular logarithmic specification is superior to the usual regression specification. This analysis shows that the superiority of a logarithmic specification in terms of the least-squares norm is heavily dependent on the available data set. The last section deals with economists' "curve-fitting fetishism". We propose that a distinction be made between curve fitting over past observations and the development of a theoretical or empirical law capable of maintaining its fitting power for future observations. Finally, we conclude the paper with several epistemological issues in relation to dimensions and curve-fitting practice in economics.
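A one-line rendering of the dimensional point (our own statement of the standard argument): expanding the exponential as a series makes the error visible, since the terms would add quantities of different dimensions,

```latex
% If x carries a unit (say metres), the series below adds metres,
% square metres, cubic metres, ..., which is dimensionally meaningless:
\[
  e^{x} \;=\; 1 + x + \frac{x^{2}}{2!} + \frac{x^{3}}{3!} + \cdots
\]
% Hence the argument must first be made dimensionless, e.g. x / x_0 for
% some reference quantity x_0, giving \log(x / x_0) instead of \log x.
```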
Abstract:
We investigate the determinants of teamwork and workers' cooperation within the firm. Up to now the literature has almost exclusively focused on workers' incentives as the main determinants of workers' cooperation. We take a broader look at the firm's organizational design and analyze the impact that different aspects of it might have on cooperation. In particular, we consider the way in which the degree of decentralization of decisions and the use of complementary HRM practices (what we call the firm's vertical organizational design) can affect workers' collaboration with each other. We test the model's predictions on a unique dataset of Spanish small and medium-sized firms containing a rich set of variables that allows us to use sensible proxies for workers' cooperation. We find that the decentralization of labor decisions (and to a lesser extent that of task planning) has a positive impact on workers' cooperation. Likewise, cooperation is positively correlated with many of the HRM practices that seem to favor workers' interaction the most. We also confirm the previous finding that collaborative efforts respond positively to pay incentives, and particularly to group or company incentives.
Abstract:
The biplot has proved to be a powerful descriptive and analytical tool in many areas of applications of statistics. For compositional data the necessary theoretical adaptation has been provided, with illustrative applications, by Aitchison (1990) and Aitchison and Greenacre (2002). These papers were restricted to the interpretation of simple compositional data sets. In many situations the problem has to be described in some form of conditional modelling. For example, in a clinical trial where interest is in how patients' steroid metabolite compositions may change as a result of different treatment regimes, interest is in relating the compositions after treatment to the compositions before treatment and the nature of the treatments applied. To study this through a biplot technique requires the development of some form of conditional compositional biplot. This is the purpose of this paper. We choose as a motivating application an analysis of the 1992 US Presidential Election, where interest may be in how the three-part composition, the percentage division among the three candidates (Bush, Clinton and Perot) of the presidential vote in each state, depends on the ethnic composition and on the urban-rural composition of the state. The methodology of conditional compositional biplots is first developed and a detailed interpretation of the 1992 US Presidential Election provided. We use a second application involving the conditional variability of tektite mineral compositions with respect to major oxide compositions to demonstrate some hazards of simplistic interpretation of biplots. Finally we conjecture on further possible applications of conditional compositional biplots.
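For readers unfamiliar with the construction, a compositional biplot in the Aitchison sense is built from a centred log-ratio transform followed by a singular value decomposition. The sketch below is our own minimal illustration with made-up data of the core (unconditional) steps, not the paper's conditional extension.

```python
import numpy as np

def clr(X):
    """Centred log-ratio transform of compositions (rows sum to 1).
    Maps each composition to log(x_j / g(x)), with g the geometric mean;
    this is the starting point for an Aitchison-style biplot.
    """
    logX = np.log(X)
    return logX - logX.mean(axis=1, keepdims=True)

# Toy three-part compositions (e.g. vote shares across spatial units).
X = np.array([
    [0.43, 0.37, 0.20],
    [0.35, 0.46, 0.19],
    [0.41, 0.39, 0.20],
    [0.37, 0.43, 0.20],
])

Z = clr(X)
Z = Z - Z.mean(axis=0)            # double-centre before the SVD
U, s, Vt = np.linalg.svd(Z, full_matrices=False)

row_scores = U[:, :2] * s[:2]     # form biplot: row markers (compositions)
col_loadings = Vt[:2].T           # column markers (parts); distances between
                                  # markers approximate log-ratio variances
```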
Abstract:
We bring together the different conceptual issues involved in measuring inequality of opportunity, discuss how these concepts have been translated into computable measures, and point out the problems and choices researchers face when implementing these measures. Our analysis identifies and suggests several new possibilities for measuring inequality of opportunity. The approaches are illustrated with a selective survey of the empirical literature on income inequality of opportunity.
Abstract:
Selected configuration interaction (SCI) for atomic and molecular electronic structure calculations is reformulated in a general framework encompassing all CI methods. The linked cluster expansion is used as an intermediate device to approximate the CI coefficients B_K of disconnected configurations (those that can be expressed as products of combinations of singly and doubly excited ones) in terms of CI coefficients of lower-excited configurations, where each K is a linear combination of configuration state functions (CSFs) over all degenerate elements of K. Disconnected configurations up to sextuply excited ones are selected by Brown's energy formula, ΔE_K = (E − H_KK) B_K^2 / (1 − B_K^2), with B_K determined from the coefficients of singly and doubly excited configurations. The truncation energy error from disconnected configurations, ΔE_dis, is approximated by the sum of the ΔE_K of all discarded Ks. The remaining (connected) configurations are selected by thresholds based on natural orbital concepts. Given a model CI space M, a usual upper bound E_S is computed by CI in a selected space S, and E_M = E_S + ΔE_dis + δE, where δE is a residual error which can be calculated by well-defined sensitivity analyses. An SCI calculation on the Ne ground state featuring 1077 orbitals is presented. Convergence to within near-spectroscopic accuracy (0.5 cm^−1) is achieved in a model space M of 1.4 × 10^9 CSFs (1.1 × 10^12 determinants) containing up to quadruply excited CSFs. Accurate energy contributions of quintuples and sextuples in a model space of 6.5 × 10^12 CSFs are obtained. The impact of SCI on various orbital methods is discussed. Since ΔE_dis can readily be calculated for very large basis sets without the need for a CI calculation, it can be used to estimate the orbital basis incompleteness error. A method for the precise and efficient evaluation of E_S is taken up in a companion paper.
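To make the selection logic concrete, here is a deliberately toy sketch (our own illustration; the labels and numbers are hypothetical, and real SCI codes operate on CSF lists rather than dictionaries) of keeping configurations whose estimated energy contribution passes a threshold while accumulating the discarded remainder as the truncation estimate ΔE_dis:

```python
def select_configurations(estimates, threshold):
    """Partition configurations by estimated energy contribution.

    estimates : dict mapping a configuration label K to its estimated
                contribution dE_K (e.g. from Brown's formula,
                dE_K = (E - H_KK) * B_K**2 / (1 - B_K**2)).
    threshold : configurations with |dE_K| below this are discarded,
                but their contributions are summed into dE_dis so the
                truncation error of the selected space can be corrected.
    """
    selected = {}
    dE_dis = 0.0
    for K, dE in estimates.items():
        if abs(dE) >= threshold:
            selected[K] = dE
        else:
            dE_dis += dE
    return selected, dE_dis

# Hypothetical contributions in hartree; only the two largest survive a
# 1e-6 threshold, and the rest feed the additive correction dE_dis.
estimates = {"T1": -3.2e-4, "Q7": -8.5e-6, "Q9": -4.0e-7, "S2": -9.1e-8}
selected, dE_dis = select_configurations(estimates, 1e-6)
```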
Abstract:
This paper presents and estimates a dynamic choice model in the attribute space considering rational consumers. In light of the evidence of several state-dependence patterns, the standard attribute-based model is extended by considering a general utility function where pure inertia and pure variety-seeking behaviors can be explained in the model as particular linear cases. The dynamics of the model are fully characterized by standard dynamic programming techniques. The model presents a stationary consumption pattern that can be inertial, where the consumer only buys one product, or a variety-seeking one, where the consumer shifts among varied products. We run some simulations to analyze the consumption paths out of the steady state. Under the hybrid utility assumption, the consumer behaves inertially among the unfamiliar brands for several periods, eventually switching to a variety-seeking behavior when the stationary levels are approached. An empirical analysis is run using scanner databases for three different product categories: fabric softener, saltine crackers, and catsup. Non-linear specifications provide the best fit to the data, as hybrid functional forms are found in all the product categories for most attributes and segments. These results reveal the statistical superiority of the non-linear structure and confirm the gradual trend to seek variety as the level of familiarity with the purchased items increases.
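As a minimal illustration of the kind of dynamic programming characterization the paper refers to (our own toy model, not the authors' specification): a consumer whose per-period utility depends on the familiarity of the purchased brand either locks onto one brand or alternates, depending on the sign of the familiarity effect.

```python
import numpy as np

# Toy dynamic choice model: two brands, state = (f0, f1) familiarity
# levels on a grid. Buying brand i raises f_i one step and lowers the
# other one step. Utility of the purchased brand is (f + 1) ** gamma:
# gamma > 0 means familiarity raises utility (inertia: stick with one
# brand); gamma < 0 means satiation (variety seeking: alternate).
F = 10                      # familiarity grid: 0 .. F-1
beta = 0.95                 # discount factor
gamma = -0.5                # try +0.5 to see inertial behavior instead

def utility(f, gamma):
    return (f + 1.0) ** gamma

V = np.zeros((F, F))
for _ in range(500):        # value iteration to (near) fixed point
    V_new = np.empty_like(V)
    for f0 in range(F):
        for f1 in range(F):
            v0 = utility(f0, gamma) + beta * V[min(f0 + 1, F - 1), max(f1 - 1, 0)]
            v1 = utility(f1, gamma) + beta * V[max(f0 - 1, 0), min(f1 + 1, F - 1)]
            V_new[f0, f1] = max(v0, v1)
    V = V_new

# Simulate the optimal consumption path from an unfamiliar state.
f0, f1, path = 0, 0, []
for _ in range(20):
    v0 = utility(f0, gamma) + beta * V[min(f0 + 1, F - 1), max(f1 - 1, 0)]
    v1 = utility(f1, gamma) + beta * V[max(f0 - 1, 0), min(f1 + 1, F - 1)]
    choice = 0 if v0 >= v1 else 1
    path.append(choice)
    if choice == 0:
        f0, f1 = min(f0 + 1, F - 1), max(f1 - 1, 0)
    else:
        f0, f1 = max(f0 - 1, 0), min(f1 + 1, F - 1)
print(path)  # alternation signals variety seeking; a constant run, inertia
```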
Abstract:
This paper presents findings from a study investigating a firm's ethical practices along the value chain. In so doing we attempt to better understand potential relationships between a firm's ethical stance with its customers and those of its suppliers within a supply chain, and to identify particular sectoral and cultural influences that might impinge on this. Drawing upon a database comprising 667 industrial firms from 27 different countries, we found that ethical practices begin with the firm's relationship with its customers, the characteristics of which then influence the ethical stance with the firm's suppliers within the supply chain. Importantly, market structure, along with some key cultural characteristics, was also found to exert significant influence on the implementation of ethical policies in these firms.
Abstract:
Terrestrial laser scanning (TLS) is one of the most promising surveying techniques for rock slope characterization and monitoring. Landslide and rockfall movements can be detected by means of comparison of sequential scans. One of the most pressing challenges in natural hazards is the combined temporal and spatial prediction of rockfall. An outdoor experiment was performed to ascertain whether the TLS instrumental error is small enough to enable detection of precursory displacements of millimetric magnitude; it consisted of known displacements of three objects relative to a stable surface. Results show that millimetric changes cannot be detected by analysis of the unprocessed datasets. Displacement measurements are improved considerably by applying Nearest Neighbour (NN) averaging, which reduces the error (1σ) by up to a factor of 6. This technique was applied to displacements prior to the April 2007 rockfall event at Castellfollit de la Roca, Spain. The maximum precursory displacement measured was 45 mm, approximately 2.5 times the standard deviation of the model comparison, hampering the distinction between actual displacement and instrumental error using conventional methodologies. Encouragingly, the precursory displacement was clearly detected by applying the NN averaging method. These results show that millimetric displacements prior to failure can be detected using TLS.
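A minimal sketch of the kind of nearest-neighbour averaging referred to (our own illustration using scipy's k-d tree; the paper's exact neighbourhood size and parameters are not given in the abstract): each point's displacement estimate is replaced by the mean over its k nearest neighbours, which averages down spatially uncorrelated instrumental noise by roughly a factor of sqrt(k).

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_average(points, values, k=25):
    """Replace each point's value (e.g. scan-to-scan distance) by the
    mean over its k nearest neighbours. For spatially uncorrelated
    instrumental noise this reduces the random error roughly by
    sqrt(k), which is how millimetric precursory displacements can
    emerge from a noisier raw TLS comparison.
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)   # indices of the k nearest neighbours
    return values[idx].mean(axis=1)

# Toy example: a 5 mm displacement step buried in 3 mm (1 sigma) noise.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(5000, 3))
disp = np.where(pts[:, 0] > 0.5, 0.005, 0.0) + rng.normal(0, 0.003, 5000)
smoothed = nn_average(pts, disp)       # the step becomes clearly visible
```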
Abstract:
This article summarizes the basic principles of photoelectron spectroscopy for surface analysis, with examples of applications in materials science that illustrate the capabilities of the related techniques.