895 results for time and movements
Abstract:
Previous research has been interpreted to suggest that the startle reflex mediates the RT facilitation observed if intense, accessory acoustic stimuli are presented coinciding with the onset of a visual imperative stimulus in a forewarned simple RT task. The present research replicated this finding as well as the facilitation of startle observed during the imperative stimulus. It failed, however, to find any relationship between the size of the blink startle reflex elicited by the accessory acoustic stimuli, which differed in intensity and rise time, and RT or RT facilitation observed on trials with accessory acoustic stimuli. This finding suggests that the RT facilitation is not mediated by the startle reflex elicited by the accessory acoustic stimuli.
Abstract:
In this thesis we develop a new generative model of social networks belonging to the family of time-varying networks. Correctly modelling the mechanisms that shape the growth of a network and the dynamics of edge activation and inactivation is of central importance in network science. Indeed, by means of generative models that mimic the real-world dynamics of contacts in social networks, it is possible to forecast the outcome of an epidemic process, optimize an immunization campaign, or optimally spread information among individuals. This task can now be tackled by taking advantage of the recent availability of large-scale, high-quality and time-resolved datasets. This wealth of digital data has allowed us to deepen our understanding of the structure and properties of many real-world networks. Moreover, the empirical evidence of a temporal dimension in networks prompted a switch of paradigm from a static representation of graphs to a time-varying one. In this work we exploit the activity-driven paradigm (a modelling tool belonging to the family of time-varying networks) to develop a general dynamical model that encodes the fundamental mechanisms shaping the topology and temporal structure of social networks: social capital allocation and burstiness. The former accounts for the fact that individuals do not invest their time and social interactions at random, but rather allocate them toward already known nodes of the network. The latter accounts for the heavy-tailed distributions of inter-event times in social networks. We then empirically measure the properties of these two mechanisms in seven real-world datasets, develop a data-driven model, and solve it analytically. We check the results against numerical simulations and test our predictions on real-world datasets, finding good agreement between the two. Moreover, we find and characterize a non-trivial interplay between burstiness and social capital allocation in the parameter phase space. Finally, we present a novel approach to the development of a complete generative model of time-varying networks. This model is inspired by Kauffman's theory of the adjacent possible and is based on a generalized version of Pólya's urn. Remarkably, most of the complex and heterogeneous features of real-world social networks are naturally reproduced by this dynamical model, together with many higher-order topological properties (clustering coefficient, community structure, etc.).
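A minimal Python sketch of the two mechanisms named above, assuming a memory kernel of the c/(c+k) form for social capital allocation and power-law inter-event times for burstiness; the parameters n, c and alpha are illustrative assumptions, not the thesis's fitted model:

    import random

    def sample_interevent(alpha, t_min=1.0):
        # Inverse-transform sample from a Pareto density p(t) ~ t^(-alpha), t >= t_min
        return t_min * (1.0 - random.random()) ** (-1.0 / (alpha - 1.0))

    def simulate(n=200, events=5000, c=1.0, alpha=2.5):
        """Toy activity-driven dynamics with memory and burstiness (assumed form)."""
        next_fire = [sample_interevent(alpha) for _ in range(n)]  # bursty clocks
        contacts = [set() for _ in range(n)]
        for _ in range(events):
            i = min(range(n), key=next_fire.__getitem__)   # next node to activate
            t = next_fire[i]
            k = len(contacts[i])
            if k and random.random() > c / (c + k):
                j = random.choice(sorted(contacts[i]))     # reinforce a known tie
            else:
                j = random.randrange(n)                    # explore a new contact
                while j == i or j in contacts[i]:
                    j = random.randrange(n)
            contacts[i].add(j)
            contacts[j].add(i)
            next_fire[i] = t + sample_interevent(alpha)    # bursty reactivation
        return contacts

With this kernel, the probability of exploring a new tie decays as a node's circle grows, which is the "social capital allocation" effect, while the Pareto clocks produce the heavy-tailed inter-event times.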
Abstract:
Measuring Job Openings: Evidence from Swedish Plant Level Data. In modern macroeconomic models "job openings" are a key component. Thus, when taking these models to the data, we need an empirical counterpart to the theoretical concept of job openings. To this end, the literature relies on job vacancies measured either in survey or register data. Insofar as vacancies capture the concept of job openings well, we should see a tight relationship between vacancies and subsequent hires at the micro level. To investigate this, I analyze a new data set of Swedish hires and job vacancies at the plant level covering the period 2001-2012. I find that vacancies contain little power in predicting hires over and above (i) whether the number of vacancies is positive and (ii) plant size. Building on this, I propose an alternative measure of job openings in the economy. This measure (i) better predicts hiring at the plant level and (ii) provides a better-fitting aggregate matching function vis-à-vis the traditional vacancy measure.

Firm Level Evidence from Two Vacancy Measures. Using firm-level survey and register data for both Sweden and Denmark, we show systematic mismeasurement in both vacancy measures. While the register-based measure on the aggregate constitutes a quarter of the survey-based measure, the latter is not a superset of the former. To obtain the full set of unique vacancies in these two databases, the number of survey vacancies should be multiplied by approximately 1.2. Importantly, this adjustment factor varies over time and across firm characteristics. Our findings have implications for both the search-matching literature and policy analysis based on vacancy measures: observed changes in vacancies can be an outcome of changes in mismeasurement, and are not necessarily changes in the actual number of vacancies.

Swedish Unemployment Dynamics. We study the contribution of different labor market flows to business cycle variations in unemployment in the context of a dual labor market. To this end, we develop a decomposition method that allows for a distinction between permanent and temporary employment. We also allow for the slow convergence to steady state that is characteristic of European labor markets. We apply the method to a new Swedish data set covering the period 1987-2012 and show that the relative contributions of inflows and outflows to/from unemployment are roughly 60/30. The remaining 10% is due to flows not involving unemployment. Even though temporary contracts only cover 9-11% of the working-age population, variations in flows involving temporary contracts account for 44% of the variation in unemployment. We also show that the importance of flows involving temporary contracts is likely to be understated if one does not account for non-steady-state dynamics.

The New Keynesian Transmission Mechanism: A Heterogeneous-Agent Perspective. We argue that a 2-agent version of the standard New Keynesian model, in which a "worker" receives only labor income and a "capitalist" only profit income, offers insights about how income inequality affects the monetary transmission mechanism. Under rigid prices, monetary policy affects the distribution of consumption, but it has no effect on output, as workers choose not to change their hours worked in response to wage movements. In the corresponding representative-agent model, in contrast, hours do rise after a monetary policy loosening due to a wealth effect on labor supply: profits fall, thus reducing the representative worker's income. If wages are rigid too, however, the monetary transmission mechanism is active and resembles that in the corresponding representative-agent model. Here, workers are not on their labor supply curve and hence respond passively to demand, and profits are procyclical.
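The first essay evaluates vacancy measures through an aggregate matching function. The abstract does not state its functional form; the conventional Cobb-Douglas specification from the search-matching literature, shown here as an illustrative assumption, is the standard benchmark:

    % Hires H_t from unemployment U_t and a job-openings measure V_t
    H_t = \mu \, U_t^{\alpha} \, V_t^{1-\alpha},
    \qquad
    \ln\frac{H_t}{U_t} = \ln\mu + (1-\alpha)\,\ln\frac{V_t}{U_t} + \varepsilon_t

"Better-fitting" then refers to the fit of this log-linear regression when V_t is the proposed job-openings measure rather than the traditional vacancy count.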
Abstract:
Three different stoichiometric forms of RbxMn[Fe(CN)6]y·zH2O [x = 0.96, y = 0.98, z = 0.75 (1); x = 0.94, y = 0.88, z = 2.17 (2); x = 0.61, y = 0.86, z = 2.71 (3)] Prussian blue analogues were synthesized and investigated by magnetic, calorimetric, Raman spectroscopic, X-ray diffraction, and 57Fe Mössbauer spectroscopic methods. Compounds 1 and 2 show a hysteresis loop between the high-temperature (HT) FeIII(S = 1/2)-CN-MnII(S = 5/2) and the low-temperature (LT) FeII(S = 0)-CN-MnIII(S = 2) forms, of 61 and 135 K width centered at 273 and 215 K, respectively, whereas the third compound remains in the HT phase down to 5 K. The splitting of the quadrupolar doublets in the 57Fe Mössbauer spectra reveals the electron-transfer-active centers. Refinement of the X-ray powder diffraction profiles shows that electron-transfer-active materials have the majority of the Rb ions on only one of the two possible interstitial sites, whereas non-electron-transfer-active materials have the Rb ions equally distributed. Moreover, the stability of the compounds over time and following heat treatment is also discussed.
Abstract:
Deep hole drilling is one of the most complicated metal cutting processes and one of the most difficult to perform on CNC machine tools or machining centres under conditions of limited manpower or unmanned operation. This research investigates aspects of the deep hole drilling process with small-diameter twist drills and presents a prototype system for real-time process monitoring and adaptive control; two main research objectives are fulfilled in particular. The first objective is the experimental investigation of the mechanics of the deep hole drilling process, using twist drills without internal coolant supply, in the range of diameters Ø2.4 to Ø4.5 mm and working lengths up to 40 diameters; the definition of the problems associated with the low strength of these tools; and the study of the mechanisms of catastrophic failure, which manifest themselves well before and along with the classic mechanism of tool wear. The relationships of drilling thrust and torque with the depth of penetration and the various machining conditions are also investigated, and the experimental evidence suggests that the process is inherently unstable at depths beyond a few diameters. The second objective is the design and implementation of a system for intelligent CNC deep hole drilling, the main task of which is to ensure the integrity of the process and the safety of the tool and the workpiece. This task is achieved by interfacing the CNC system of the machine tool to an external computer which performs the following functions: on-line monitoring of the drilling thrust and torque; adaptive control of feed rate, spindle speed and tool penetration (Z-axis); indirect monitoring of tool wear by pattern recognition of variations of the drilling thrust with cumulative cutting time and drilled depth; operation as a database for tools and workpieces; and finally the issuing of alarms and diagnostic messages.
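A minimal sketch of the kind of supervisory loop the second objective describes, in Python; the callables stand in for the CNC/sensor interface, and all names and threshold values are illustrative assumptions, not the thesis's actual limits:

    def supervise(read_depth, read_thrust, read_torque, set_feed, retract,
                  target_depth, thrust_limit=450.0, torque_limit=60.0):
        """Hypothetical thrust/torque watchdog with adaptive feed (units: N, N*cm,
        mm, mm/rev). The callables abstract the machine-tool interface."""
        feed = 0.02                                  # conservative starting feed
        while read_depth() < target_depth:
            thrust, torque = read_thrust(), read_torque()
            if thrust > thrust_limit or torque > torque_limit:
                retract()                            # peck: clear chips, save the drill
                feed *= 0.8                          # resume more conservatively
            elif thrust < 0.5 * thrust_limit:
                feed = min(feed * 1.05, 0.05)        # cautiously restore productivity
            set_feed(feed)

The point of the sketch is the asymmetry: throttling is immediate when thrust or torque spikes (the precursor of catastrophic failure at depth), while recovery of the feed rate is gradual.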
Abstract:
Despite the availability of various control techniques and project control software, many construction projects still do not achieve their cost and time objectives. Research in this area has so far mainly been devoted to identifying the causes of cost and time overruns; there is limited research geared towards studying the factors that inhibit the ability of practitioners to effectively control their projects. To fill this gap, a survey was conducted of 250 construction project organizations in the UK, followed by face-to-face interviews with experienced practitioners from 15 of these organizations. The common factors that inhibit both time and cost control during construction projects were first identified. Subsequently, 90 mitigating measures were developed and recommended for the top five inhibiting factors: design changes, risks/uncertainties, inaccurate evaluation of project time/duration, complexities, and non-performance of subcontractors. These mitigating measures were classified as preventive, predictive, corrective and organizational measures. They can be used as a checklist of good practice and help project managers to improve the effectiveness of control of their projects.
Abstract:
Numerous techniques have been developed to control the cost and time of construction projects. However, there is limited research on the issues surrounding the practical usage of these techniques. To address this, a survey was conducted of the top 150 construction companies and 100 construction consultancies in the UK, aimed at identifying common project control practices and the factors inhibiting effective project control in practice. It found that, despite the widespread application of control techniques, a high proportion of respondents still experienced cost and time overruns on a significant proportion of their projects. Analysis of the survey results concluded that more effort should be directed at managing the top project control inhibiting factors identified. This paper outlines some measures for mitigating these inhibiting factors so that the outcome of project time and cost control can be improved in practice.
Abstract:
Despite recent research on time (e.g. Hedaa & Törnroos, 2001), consideration of the time dimension in data collection, analysis and interpretation in supply network research remains limited. Drawing on a body of literature from organization studies, and on empirical findings from a six-year action research programme and a related study of network learning, we reflect on time, timing and timeliness in interorganizational networks. The empirical setting is supply networks in the English health sector, wherein we identify and elaborate various issues of time, both within the case and in terms of the research process. Our analysis is wide-ranging and multi-level, from the global (e.g. identifying the notion of life cycles) to the particular (e.g. different cycle times in supply, such as daily for deliveries and yearly for contracts). We discuss the ‘speeding up’ of inter-organizational ‘e’ time and its tensions with other time demands. In closing the paper, we relate our conclusions to the future conduct of the research programme, to supply research more generally, and to the practice of managing supply (in) networks.
Abstract:
Congenital nystagmus (CN) is an ocular-motor disorder characterised by involuntary, conjugate ocular oscillations, and its pathogenesis is still under investigation. This kind of nystagmus is termed congenital (or infantile) since it may be present at birth or arise in the first months of life. Most CN patients show a considerable decrease in visual acuity: image fixation on the retina is disturbed by the continuous, mainly horizontal, oscillations of the nystagmus. However, the image of a given target can still be stable during short periods in which eye velocity slows down while the target image is placed on the fovea (called foveation intervals). To quantify the extent of nystagmus, eye movement recordings are routinely employed, allowing physicians to extract and analyse its main features, such as waveform shape, amplitude and frequency. Using eye movement recordings, it is also possible to compute estimated visual acuity predictors: analytical functions which estimate expected visual acuity from signal features such as foveation time and foveation position variability. Use of these functions extends the information obtained from typical visual acuity measurements (e.g. the Landolt C test) and could support therapy planning and monitoring. This study focuses on the detection of CN patients' waveform type and on the measurement of foveation time. Specifically, it proposes a robust method to recognize cycles corresponding to the specific CN waveform in the eye movement pattern and, for those cycles, to evaluate the exact signal tracts in which a subject foveates. About 40 eye-movement recordings, either infrared-oculographic or electro-oculographic, were acquired from 16 CN subjects. Results suggest that the use of an adaptive threshold applied to the eye velocity signal could improve the estimation of the slow-phase start point. This can enhance the computation of foveation time and reduce the influence of repositioning saccades and data noise on waveform type identification.
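A minimal sketch of adaptive-threshold foveation detection along the lines the abstract suggests, in Python; the sampling rate, the threshold rule (a robust multiple of the velocity spread) and the foveation window are illustrative assumptions, not the study's actual parameters:

    import numpy as np

    def foveation_mask(position_deg, fs=500.0, k=1.5, window_deg=0.5):
        """Mark samples that plausibly belong to foveation intervals.
        position_deg: horizontal eye position (deg); fs: sampling rate (Hz).
        The velocity threshold adapts to each recording via the MAD (assumed rule)."""
        velocity = np.gradient(position_deg) * fs            # deg/s
        mad = np.median(np.abs(velocity - np.median(velocity)))
        vel_thresh = k * 1.4826 * mad                        # adaptive, data-driven
        slow = np.abs(velocity) < vel_thresh                 # slow-phase samples
        on_fovea = np.abs(position_deg - np.median(position_deg)) < window_deg
        return slow & on_fovea

Because the threshold is derived from each recording's own velocity spread rather than a fixed value, noisy electro-oculographic traces and cleaner infrared traces get comparably placed slow-phase onsets, which is the benefit the abstract reports.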
Abstract:
The popularity of online social media platforms provides an unprecedented opportunity to study real-world complex networks of interactions. However, releasing this data to researchers and the public comes at the cost of potentially exposing private and sensitive user information. It has been shown that a naive anonymization of a network by removing the identity of the nodes is not sufficient to preserve users’ privacy. In order to deal with malicious attacks, k-anonymity solutions have been proposed to partially obfuscate topological information that can be used to infer nodes’ identity. In this paper, we study the problem of ensuring k-anonymity in time-varying graphs, i.e., graphs with a structure that changes over time, and multi-layer graphs, i.e., graphs with multiple types of links. More specifically, we examine the case in which the attacker has access to the degree of the nodes. The goal is to generate a new graph where, given the degree of a node in each (temporal) layer of the graph, such a node remains indistinguishable from at least k-1 other nodes in the graph. In order to achieve this, we find the optimal partitioning of the graph nodes such that the cost of anonymizing the degree information within each group is minimum. We show that this reduces to a special case of a Generalized Assignment Problem, and we propose a simple yet effective algorithm to solve it. Finally, we introduce an iterated linear programming approach to enforce the realizability of the anonymized degree sequences. The efficacy of the method is assessed through an extensive set of experiments on synthetic and real-world graphs.
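As a single-layer illustration of the grouping step: the paper's multi-layer formulation maps to a Generalized Assignment Problem, but in one dimension the classic dynamic program over sorted degrees (with the usual assumption that optimal groups have between k and 2k-1 members) already captures the idea. A Python sketch:

    def degree_anonymization_cost(degrees, k):
        """Minimum total degree increase so that every degree value is shared by
        at least k nodes, grouping consecutive degrees sorted in descending order
        and raising each group to its maximum (single-layer special case)."""
        d = sorted(degrees, reverse=True)
        n = len(d)
        prefix = [0]
        for x in d:
            prefix.append(prefix[-1] + x)
        INF = float("inf")
        cost = [0] + [INF] * n                         # cost[j]: first j nodes done
        for j in range(k, n + 1):
            for i in range(max(0, j - 2 * k + 1), j - k + 1):
                group = d[i] * (j - i) - (prefix[j] - prefix[i])
                cost[j] = min(cost[j], cost[i] + group)
        return cost[n]

Raising each group to its maximum degree yields the anonymized target sequence; the iterated linear programming step described in the abstract would then enforce that such a sequence is realizable as an actual graph.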
Abstract:
Prices of U.S. Treasury securities vary over time and across maturities. When the market in Treasurys is sufficiently complete and frictionless, these prices may be modeled by a function of time and maturity. A cross-section of this function for time held fixed is called the yield curve; the aggregate of these sections is the evolution of the yield curve. This dissertation studies aspects of this evolution.

There are two complementary approaches to the study of yield curve evolution here: the first is principal components analysis; the second is wavelet analysis. In both approaches, both the time and maturity variables are discretized. In principal components analysis, the vectors of yield curve shifts are viewed as observations of a multivariate normal distribution. The resulting covariance matrix is diagonalized, and the resulting eigenvalues and eigenvectors (the principal components) are used to draw inferences about the yield curve evolution.

In wavelet analysis, the vectors of shifts are resolved into hierarchies of localized fundamental shifts (wavelets) that leave specified global properties invariant (average change and duration change). The hierarchies relate to the degree of localization, with movements restricted to a single maturity at the base and general movements at the apex. Second-generation wavelet techniques allow better adaptation of the model to economic observables. Statistically, the wavelet approach is inherently nonparametric, while the wavelets themselves are better adapted to describing a complete market.

Principal components analysis provides information on the dimension of the yield curve process. While there is no clear demarcation between operative factors and noise, the top six principal components pick up 99% of total interest rate variation 95% of the time. An economically justified basis for this process is hard to find; for example, a simple linear model will not suffice for the first principal component, and the shape of this component is nonstationary.

Wavelet analysis works more directly with yield curve observations than principal components analysis. In fact, the complete process from bond data to multiresolution is presented, including the dedicated Perl programs and the details of the portfolio metrics and the specially adapted wavelet construction. The result is more robust statistics which provide balance to the more fragile principal components analysis.
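A minimal sketch of the principal-components step as described, in Python; the maturity grid and data source are whatever the dissertation uses and are left abstract here:

    import numpy as np

    def yield_curve_pca(yields):
        """yields: (T days) x (M maturities) array of observed yields.
        Returns eigenvalues (descending), eigenvectors, and the cumulative
        fraction of shift variance explained by the leading components."""
        shifts = np.diff(yields, axis=0)              # daily yield curve shifts
        cov = np.cov(shifts, rowvar=False)            # M x M covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)        # symmetric eigendecomposition
        order = np.argsort(eigvals)[::-1]             # largest components first
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]
        explained = np.cumsum(eigvals) / eigvals.sum()
        return eigvals, eigvecs, explained

On the dissertation's data, explained[5] is the quantity behind the "top six principal components pick up 99% of total interest rate variation" claim.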
Abstract:
The study examines the thought of Yanagita Kunio (1875–1962), an influential Japanese nationalist thinker and a founder of the academic discipline named minzokugaku. The purpose of the study is to bring to light the unredeemed potential of his intellectual and political project as a critique of the way in which modern politics and knowledge systematically suppress global diversity. The study reads his texts against the backdrop of the modern understanding of space and time and its political and moral implications, and traces the historical evolution of his thought, culminating in the establishment of minzokugaku. My reading of Yanagita’s texts draws on three interpretive hypotheses. First, his thought can be interpreted as a critical engagement with John Stuart Mill’s philosophy of history, as he turns Mill’s defense of diversity against Mill’s justification of enlightened despotism in non-Western societies. Second, to counter Mill’s individualistic notion of progressive agency, he turns to a Marxian notion of anthropological space, in which a laboring class makes history by continuously transforming nature, and rehabilitates the common people (jomin) as progressive agents. Third, in addition to the common people, Yanagita integrates wandering people as a countervailing force to the innate parochialism and conservatism of agrarian civilization. To excavate the unrecorded history of ordinary farmers and wandering people and promote the formation of national consciousness, his minzokugaku adopts travel as an alternative method for knowledge production and political education. In light of this interpretation, the aim of Yanagita’s intellectual and political project can be understood as both a defense and a critique of the Enlightenment tradition. Intellectually, he attempts to navigate between spurious universalism and reactionary particularism by revaluing diversity as a necessary condition for universal knowledge and human progress. Politically, his minzokugaku aims at nation-building/globalization from below by tracing back the history of a migratory process cutting across existing boundaries. His project is opposed to nation-building from above, which aims to integrate the world population into international society at the expense of global diversity.
Abstract:
Knowledge of cell electronics has led to its integration into medicine, either by physically interfacing electronic devices with biological systems or by using electronics for both the detection and characterization of biological materials. In this dissertation, an electrical impedance sensor (EIS) was used to measure electrode-surface impedance changes in cell samples in order to assess the human and environmental toxicity of nanoscale materials in 2D and 3D cell culture models. The impedimetric response of human lung fibroblasts and rainbow trout gill epithelial cells exposed to various nanomaterials was tested to determine the kinetic effects of these materials on the cells and to demonstrate the biosensor's ability to monitor nanotoxicity in real time. Further, the EIS allowed rapid, real-time and multi-sample analysis, creating a versatile, noninvasive tool that is able to provide quantitative information on alterations in cellular function. We then extended the application of the unique capabilities of the EIS to real-time analysis of cancer cell responses to externally applied, low-intensity alternating electric fields at different intermediate frequencies. Decreases in the growth profiles of ovarian and breast cancer cells were observed at 200 and 100 kHz, respectively, indicating specific inhibitory effects on dividing cells in culture, in contrast to the non-cancerous HUVECs and mammary epithelial cells. We then sought to enhance the effects of the electric field by altering the cancer cells' electronegative membrane properties with HER2-antibody-functionalized nanoparticles. An Annexin V/EthD-III assay and zeta potential measurements were performed to determine the cell death mechanism, indicating apoptosis and a decrease in zeta potential with the incorporation of the nanoparticles. With more negatively charged HER2-AuNPs attached to the cancer cell membrane, the decrease in membrane potential would leave the cells more vulnerable to the detrimental effects of the applied electric field due to the decrease in surface charge. Therefore, by altering the cell membrane potential, one could possibly control the fate of the cell. This whole-cell-based biosensor will enhance our understanding of the responsiveness of cancer cells to electric field therapy and demonstrate potential therapeutic opportunities for electric field therapy in the treatment of cancer.
Abstract:
Construction projects are complex endeavors that require the involvement of different professional disciplines in order to meet various project objectives that are often conflicting. The level of complexity and the multi-objective nature of construction projects lend themselves to collaborative design and construction approaches such as integrated project delivery (IPD), in which the relevant disciplines work together during project conception, design and construction. Traditionally, the main objectives of construction projects have been to build in the least amount of time and with the lowest cost possible; thus the inherent and well-established relationship between cost and time has been the focus of many studies. The importance of being able to effectively model relationships among multiple objectives in building construction has been emphasized in a wide range of research. In general, the trade-off relationship between time and cost is well understood and there is ample research on the subject. However, despite the rise of sustainable building design, the relationships between time and environmental impact, and between cost and environmental impact, have not been fully investigated. The objectives of this research were mainly to analyze and identify the relationships of time, cost and environmental impact, in terms of CO2 emissions, at different levels of a building (material level, component level and building level) in the pre-use phase, including manufacturing and construction, and the relationships of life cycle cost and life cycle CO2 emissions in the usage phase. Additionally, this research aimed to develop a robust simulation-based multi-objective decision-support tool, called SimulEICon, which takes construction data uncertainty into account and is capable of incorporating life cycle assessment information into the decision-making process. The findings of this research supported the trade-off relationship between time and cost at the different building levels. Moreover, the relationship between time and CO2 emissions exhibited trade-off behavior in the pre-use phase. The relationship between cost and CO2 emissions, interestingly, was proportional in the pre-use phase, and the same pattern persisted from construction into the usage phase. Understanding the relationships between these objectives is key to successfully planning and designing environmentally sustainable construction projects.
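A minimal sketch of the non-dominated (Pareto) filtering at the core of any such multi-objective decision support, in Python; the (time, cost, CO2) tuples mirror the abstract's three objectives, but the data and function names are illustrative assumptions, not SimulEICon's actual interface:

    def pareto_front(alternatives):
        """alternatives: list of (time, cost, co2) tuples, all to be minimized.
        Keep an option only if no other option is at least as good on every
        objective and strictly better on at least one."""
        def dominates(b, a):
            return (all(x <= y for x, y in zip(b, a))
                    and any(x < y for x, y in zip(b, a)))
        return [a for a in alternatives
                if not any(dominates(b, a) for b in alternatives)]

    # Illustrative use: hypothetical (days, k$, t CO2) design alternatives.
    options = [(120, 900, 310), (150, 780, 280), (160, 800, 300), (200, 700, 400)]
    print(pareto_front(options))            # (160, 800, 300) is dominated and dropped

Presenting only the non-dominated set is what lets a decision-maker see the time/cost/CO2 trade-offs the abstract describes without wading through dominated designs.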
Abstract:
This study examines children’s temporal ways of knowing and highlights the centrality of temporal cognition in the development of children’s historical understanding. It explores how young children conceptualise time and examines the provision for temporal cognition at the levels of the intended, enacted and received history curriculum in the Irish primary school context. Positioning temporality as a prerequisite second-order concept, the study recognises the essential role of both first-order and additional second-order concepts in historical understanding. While the former can be defined as the basic, substantive content to be taught, the latter refers to a number of additional key concepts that are deemed fundamental to children’s capacity to make meaningful sense of history. The study argues for due recognition to be given to temporality, in the belief that both sets of knowledge, content and skills, are required to develop historical thinking (Lévesque, 2011). The study addresses a number of key research questions, using a mixed-methods research design comprising an analysis of history textbooks, a survey among final-year student teachers about their teaching of history, and school-based interviews with primary school children: What opportunities are available for children to develop temporal ways of knowing? How do student teachers experience being apprenticed into the available culture for teaching history and understanding temporality at primary level? What insights do the cognitive-developmental and sociocultural perspectives on learning provide for understanding the dynamics of children’s temporal ways of knowing? The study argues that developing a deeper understanding of time is a key prerequisite for connecting with, and constructing, understandings and frameworks of the past. It advances a view of temporality as complex, multi-faceted and developmental. The findings have a potential contribution to make in influencing policy and pedagogy towards establishing an elaborated and well-defined curriculum framework for developing temporal cognition at both national and international levels.