877 results for Information Visualization Environment
Abstract:
Wider economic benefits resulting from extended geographical mobility are one argument for investments in high-speed rail. More specifically, the argument for high-speed trains in Sweden has been that they can help to further spatially extend labor market regions, which in turn has a positive effect on growth and development. In this paper, the aim is to cartographically visualize the potential size of the labor markets in areas that could be affected by possible future high-speed trains. The visualization is based on forecasts of labor mobility with public transport made by the Swedish national transport forecasting tool, SAMPERS, for two alternative high-speed rail scenarios. The analysis, not surprisingly, suggests that the largest impact of high-speed trains occurs in the area where the future high-speed rail tracks are planned to be built. This expected effect on local labor market regions could mean that regional economic development effects are also to be expected in this area. However, the SAMPERS forecasts in general indicate relatively small increases in local labor market potentials.
Abstract:
This thesis consists of a summary and four self-contained papers. Paper [I] Following the 1987 report by the World Commission on Environment and Development, genuine saving has come to play a key role in the context of sustainable development, and the World Bank regularly publishes genuine saving figures on a national basis. However, these figures are typically calculated as if the tax system were non-distortionary. This paper presents an analogue to genuine saving in a second-best economy, where the government raises revenue by means of distortionary taxation. We show how the social cost of public debt, which depends on the marginal excess burden, ought to be reflected in genuine saving. We also illustrate by presenting calculations for Greece, Japan, Portugal, the U.K., the U.S. and the OECD average, showing that the numbers published by the World Bank are likely to be biased and may even give incorrect information as to whether the economy is locally sustainable. Paper [II] This paper examines the relationships among per capita CO2 emissions, per capita GDP and international trade based on panel data spanning the period 1960-2008 for 150 countries. A distinction is also made between OECD and non-OECD countries to capture how this relationship differs between developed and developing economies. We apply panel unit root and cointegration tests, and estimate a panel error correction model. The results from the error correction model suggest that there are long-term relationships between the variables for the whole sample and for non-OECD countries. Finally, Granger causality tests show that there is bi-directional short-term causality between per capita GDP and international trade for the whole sample, and between per capita GDP and CO2 emissions for OECD countries.
Paper [III] Fundamental questions in economics are why some regions are richer than others, why their growth rates differ, whether their growth rates tend to converge, and what key factors explain economic growth. This paper deals with average income growth, net migration, and changes in unemployment rates at the municipal level in Sweden. The aim is to explore in depth the effects of possible underlying determinants, with a particular focus on local policy variables. The analysis is based on a three-equation model. Our results show, among other things, that increases in local public expenditure and the income tax rate have negative effects on subsequent income growth. In addition, the results show conditional convergence, i.e. that the average income among municipal residents tends to grow more rapidly in relatively poor local jurisdictions than in initially “richer” jurisdictions, conditional on the other explanatory variables. Paper [IV] This paper explores the relationship between income growth and income inequality using data at the municipal level in Sweden for the period 1992-2007. We estimate a fixed effects panel data growth model, where within-municipality income inequality is one of the explanatory variables. Different inequality measures (the Gini coefficient, top income shares, and measures of inequality in the lower and upper parts of the income distribution) are examined. We find a positive and significant relationship between income growth and income inequality measured as the Gini coefficient and top income shares, respectively. In addition, while inequality in the upper part of the income distribution is positively associated with the income growth rate, inequality in the lower part of the income distribution seems to be negatively related to income growth.
Our findings also suggest that increased income inequality enhances growth more in municipalities with a high level of average income than in municipalities with a low level of average income.
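The Gini coefficient used as the headline inequality measure in Paper [IV] can be sketched in a few lines. This is a generic illustration of the standard formula, not the authors' code, and the sample incomes are invented:

```python
import numpy as np

def gini(incomes):
    """Compute the Gini coefficient of an income array.

    Based on the mean-absolute-difference definition,
    G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean(x)),
    evaluated via the closed form over rank-sorted values.
    """
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return (2 * np.sum(ranks * x) / (n * np.sum(x))) - (n + 1) / n

# Perfect equality gives 0; extreme concentration approaches 1.
print(round(gini([1, 1, 1, 1]), 3))    # 0.0
print(round(gini([0, 0, 0, 100]), 3))  # 0.75
```

In a growth regression like Paper [IV]'s, this statistic would be computed per municipality and year and entered as a regressor alongside the other controls.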
Abstract:
Background There is emerging evidence that the physical environment is important for health, quality of life and care, but there is a lack of valid instruments to assess health care environments. The Sheffield Care Environment Assessment Matrix (SCEAM), developed in the United Kingdom, provides a comprehensive assessment of the physical environment of residential care facilities for older people. This paper reports on the translation and adaptation of SCEAM for use in Swedish residential care facilities for older people, including information on its validity and reliability. Methods SCEAM was translated into Swedish and back-translated into English, and assessed for its relevance by experts using the content validity index (CVI) together with qualitative data. After modification, the validity assessments were repeated and followed by test-retest and inter-rater reliability tests in six units within a Swedish residential care facility that varied in terms of their environmental characteristics. Results Translation and back translation identified linguistic and semantic issues. The results of the first content validity analysis showed that more than one third of the items had item-CVI (I-CVI) values below the critical value of 0.78. After modifying the instrument, the second content validation analysis resulted in I-CVI scores above 0.78, the suggested criterion for excellent content validity. Test-retest reliability showed high stability (96% and 95% for two independent raters, respectively), and inter-rater reliability demonstrated high levels of agreement (95% and 94% on two separate rating occasions). Kappa values were very good for test-retest (κ = 0.903 and 0.869) and inter-rater reliability (κ = 0.851 and 0.832). Conclusions Adapting an instrument to a domestic context is a complex and time-consuming process, requiring an understanding of the culture where the instrument was developed and where it is to be used.
A team including the instrument’s developers, translators, and researchers is necessary to ensure a valid translation and adaptation. This study showed preliminary validity and reliability evidence for the Swedish version (S-SCEAM) when used in a Swedish context. Further, we believe that the S-SCEAM has improved compared to the original instrument and suggest that it can be used as a foundation for future developments of the SCEAM model.
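The item-level content validity index (I-CVI) against which the 0.78 threshold is applied is simply the proportion of experts rating an item as relevant. A minimal sketch, with an invented panel of six raters on the usual 4-point relevance scale:

```python
def item_cvi(ratings, relevant=(3, 4)):
    """Item-level content validity index: the proportion of experts
    who rate the item as relevant (3 or 4 on a 1-4 scale)."""
    hits = sum(1 for r in ratings if r in relevant)
    return hits / len(ratings)

# Hypothetical ratings from six experts for one instrument item.
ratings = [4, 3, 4, 2, 4, 3]
cvi = item_cvi(ratings)
print(round(cvi, 2), cvi >= 0.78)  # 0.83 True
```

With six raters, a single "not relevant" vote yields 5/6 ≈ 0.83, which still clears the 0.78 criterion; two dissenting votes would not.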
Abstract:
A challenge for the clinical management of advanced Parkinson’s disease (PD) patients is the emergence of fluctuations in motor performance, which represents a significant source of disability during patients’ activities of daily living. There is a lack of objective measurement of treatment effects for in-clinic and at-home use that can provide an overview of the treatment response. The objective of this paper was to develop a method for objective quantification of advanced PD motor symptoms related to off episodes and peak-dose dyskinesia, using spiral data gathered by a touch screen telemetry device. More specifically, the aim was to objectively characterize motor symptoms (bradykinesia and dyskinesia) to help automate the process of visual interpretation of movement anomalies in spirals as rated by movement disorder specialists. Digitized upper limb movement data of 65 advanced PD patients and 10 healthy (HE) subjects were recorded as they performed spiral drawing tasks on a touch screen device in their home environments. Several spatiotemporal features were extracted from the time series and used as inputs to machine learning methods. The methods were validated against ratings on animated spirals scored by four movement disorder specialists who visually assessed a set of kinematic features and the motor symptom. The ability of the method to discriminate between PD patients and HE subjects and the test-retest reliability of the computed scores were also evaluated. Computed scores correlated well with mean visual ratings of individual kinematic features. The best performing classifier (Multilayer Perceptron) classified the motor symptom (bradykinesia or dyskinesia) with an accuracy of 84% and an area under the receiver operating characteristic curve of 0.86 in relation to visual classifications of the raters.
In addition, the method provided high discriminating power when distinguishing between PD patients and HE subjects, and had good test-retest reliability. This study demonstrated the potential of using digital spiral analysis for objective quantification of PD-specific and/or treatment-induced motor symptoms.
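Spatiotemporal features of the kind described above are typically derived from the (x, y, t) samples of a digitized stroke. A minimal sketch, assuming evenly sampled coordinates; the two features shown (mean drawing speed and its coefficient of variation) are illustrative, not the paper's actual feature set:

```python
import numpy as np

def spiral_features(x, y, t):
    """Two simple spatiotemporal features from a digitized spiral:
    mean drawing speed and its coefficient of variation."""
    x, y, t = map(np.asarray, (x, y, t))
    dist = np.hypot(np.diff(x), np.diff(y))  # segment lengths
    speed = dist / np.diff(t)                # per-segment speed
    return {"mean_speed": speed.mean(),
            "speed_cv": speed.std() / speed.mean()}

# A perfectly steady synthetic stroke has zero speed variability.
t = np.linspace(0, 1, 50)
feats = spiral_features(t, np.zeros_like(t), t)
print(round(feats["speed_cv"], 6))  # 0.0
```

Features of this sort would then be fed to a classifier (such as the Multilayer Perceptron mentioned in the abstract) trained against the specialists' visual ratings.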
Abstract:
HydroShare is an online, collaborative system being developed for open sharing of hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access hydrologic data and models, retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community and models and workflows that require metadata on execution functionality. The HydroShare web interface and social media functions are being developed using the Drupal content management system. A geospatial visualization and analysis component enables searching, visualizing, and analyzing geographic datasets. The integrated Rule-Oriented Data System (iRODS) is being used to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. 
This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.
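The separation of system metadata from science metadata in the Resource Data Model can be sketched as plain data classes. The field names here are illustrative assumptions, not the actual HydroShare schema:

```python
from dataclasses import dataclass, field

@dataclass
class SystemMetadata:
    # Elements common to every resource (illustrative names only).
    resource_id: str
    owner: str
    created: str

@dataclass
class ScienceMetadata:
    # Type-specific science metadata, extensible per resource type.
    title: str
    keywords: list = field(default_factory=list)

@dataclass
class Resource:
    system: SystemMetadata
    science: ScienceMetadata

r = Resource(
    SystemMetadata("res-001", "alice", "2014-05-01"),
    ScienceMetadata("Streamflow at gauge X", ["hydrology", "discharge"]),
)
print(r.science.title)  # Streamflow at gauge X
```

Keeping the two metadata blocks distinct lets the system-level elements stay uniform across all resource types while science metadata varies by data type, model, or workflow.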
Abstract:
The context is built around planning activities arising from the organization structuring itself on technological and systemic bases in order to manage its processes. Systems theory applied to the organization creates a pressing need for integration. This process calls for strategic plans, in which the definitions and methodologies applied must follow a discipline focused on the processes that require decision and execution in pursuit of defined goals. In this respect, the implementation of information systems as supporting tools for decision-making processes requires a systematic approach or methodology for visualizing results and feeding them back into the data input process, generating new decision scenarios. Basic knowledge of the organization is necessary, as is an understanding of the context in which it operates and of how we should act in its development. To this end, stages are adopted in which we analyze the enterprise, its organizational policies or guidelines, and the methodology of systems implementation, in line with the company's real needs as well as its infrastructure and capacity for effort.
Abstract:
The work described in this thesis aims to support the distributed design of integrated systems and considers specifically the need for collaborative interaction among designers. Particular emphasis was given to issues that were only marginally considered in previous approaches, such as the abstraction of the distribution of design automation resources over the network, the possibility of both synchronous and asynchronous interaction among designers, and the support for extensible design data models. Such issues demand a rather complex software infrastructure, as possible solutions must encompass a wide range of software modules: from user interfaces to middleware to databases. To build such a structure, several engineering techniques were employed and some original solutions were devised. The core of the proposed solution is based on the joint application of two homonymic technologies: CAD Frameworks and object-oriented frameworks. The former concept was coined in the late 1980s within the electronic design automation community and comprises a layered software environment which aims to support CAD tool developers, CAD administrators/integrators and designers. The latter, developed during the last decade by the software engineering community, is a software architecture model for building extensible and reusable object-oriented software subsystems. In this work, we proposed to create an object-oriented framework which includes extensible sets of design data primitives and design tool building blocks. This object-oriented framework is included within a CAD Framework, where it plays important roles in typical CAD Framework services such as design data representation and management, versioning, user interfaces, design management and tool integration.
The implemented CAD Framework - named Cave2 - followed the classical layered architecture presented by Barnes, Harrison, Newton and Spickelmier, but the possibilities granted by the use of the object-oriented framework foundations allowed a series of improvements which were not available in previous approaches: - object-oriented frameworks are extensible by design, so the same should hold for the implemented sets of design data primitives and design tool building blocks. This means that both the design representation model and the software modules dealing with it can be upgraded or adapted to a particular design methodology, and that such extensions and adaptations will still inherit the architectural and functional aspects implemented in the object-oriented framework foundation; - the design semantics and the design visualization are both part of the object-oriented framework, but in clearly separated models. This allows for different visualization strategies for a given design data set, which gives collaborating parties the flexibility to choose individual visualization settings; - the control of the consistency between semantics and visualization - a particularly important issue in a design environment with multiple views of a single design - is also included in the foundations of the object-oriented framework. This mechanism is generic enough to be used by further extensions of the design data model as well, as it is based on the inversion of control between view and semantics. The view receives the user input and propagates the event to the semantic model, which evaluates whether a state change is possible. If so, it triggers the change of state of both semantics and view.
Our approach took advantage of this inversion of control and included a layer between semantics and view to take into account the possibility of multi-view consistency; - to optimize the consistency control mechanism between views and semantics, we propose an event-based approach that captures each discrete interaction of a designer with his/her respective design views. The information about each interaction is encapsulated inside an event object, which may be propagated to the design semantics - and thus to other possible views - according to the consistency policy in use. Furthermore, the use of event pools allows for late synchronization between view and semantics in case of unavailability of a network connection between them; - the use of proxy objects significantly raised the level of abstraction of the integration of design automation resources, as both remote and local tools and services are accessed through method calls on a local object. The connection to remote tools and services using a look-up protocol also completely abstracted the network location of such resources, allowing for resource addition and removal at runtime; - the implemented CAD Framework is completely based on Java technology, so it relies on the Java Virtual Machine as the layer which grants the independence between the CAD Framework and the operating system. All these improvements contributed to a higher abstraction of the distribution of design automation resources and also introduced a new paradigm for remote interaction between designers. The resulting CAD Framework is able to support fine-grained collaboration based on events, so every single design update performed by a designer can be propagated to the rest of the design team regardless of their location in the distributed environment.
This can increase group awareness and allow a richer transfer of experiences among designers, significantly improving the collaboration potential when compared to previously proposed file-based or record-based approaches. Three different case studies were conducted to validate the proposed approach, each focusing on a subset of the contributions of this thesis. The first one uses the proxy-based resource distribution architecture to implement a prototyping platform using reconfigurable hardware modules. The second one extends the foundations of the implemented object-oriented framework to support interface-based design. These extensions - design representation primitives and tool blocks - are used to implement a design entry tool named IBlaDe, which allows the collaborative creation of functional and structural models of integrated systems. The third case study concerns the integration of multimedia metadata into the design data model. This possibility is explored in the context of an online educational and training platform.
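The view/semantics inversion of control described above can be sketched as a small observer-style interaction: the view forwards a user event to the semantic model, which validates it and, if accepted, pushes the change back to every attached view. This is a generic illustration under invented names, not the Cave2 implementation:

```python
class Semantics:
    """Semantic model: validates state changes requested by views."""
    def __init__(self):
        self.state = {}
        self.views = []

    def attach(self, view):
        self.views.append(view)

    def propose(self, key, value):
        # Inversion of control: the view forwards the event here and
        # the semantic model decides whether the change is legal.
        if self.accepts(key, value):
            self.state[key] = value
            for v in self.views:      # fan out to every attached view
                v.refresh(key, value)
            return True
        return False

    def accepts(self, key, value):
        return value is not None      # placeholder validity rule

class View:
    def __init__(self, model):
        self.shown = {}
        model.attach(self)

    def refresh(self, key, value):
        self.shown[key] = value

model = Semantics()
a, b = View(model), View(model)
model.propose("width", 10)    # accepted: both views updated
model.propose("width", None)  # rejected: views keep the old value
print(a.shown, b.shown)       # {'width': 10} {'width': 10}
```

An event-pool variant, as the thesis proposes, would queue the event objects instead of delivering them immediately, allowing late synchronization when the network link between view and semantics is down.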
Abstract:
This paper develops a methodology for testing the term structure of volatility forecasts derived from stochastic volatility models, and implements it to analyze models of S&P 500 index volatility. Using measurements of the ability of volatility models to hedge and value term-structure-dependent option positions, we find that hedging tests support the Black-Scholes delta and gamma hedges, but not the simple vega hedge when there is no model of the term structure of volatility. With various models, it is difficult to improve on a simple gamma hedge assuming constant volatility. Of the volatility models, the GARCH components estimate of the term structure is preferred. Valuation tests indicate that all the models contain term structure information not incorporated in market prices.
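The delta and gamma hedge ratios the abstract refers to have standard Black-Scholes closed forms. A minimal sketch for a European call; the parameter values are invented for illustration:

```python
from math import log, sqrt, exp, erf, pi

def _ncdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call_greeks(S, K, T, r, sigma):
    """Black-Scholes delta and gamma for a European call:
    delta = N(d1), gamma = n(d1) / (S * sigma * sqrt(T))."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    pdf_d1 = exp(-0.5 * d1 * d1) / sqrt(2.0 * pi)
    return _ncdf(d1), pdf_d1 / (S * sigma * sqrt(T))

delta, gamma = bs_call_greeks(S=100, K=100, T=0.5, r=0.02, sigma=0.2)
print(round(delta, 3))  # an at-the-money call delta, a bit above 0.5
```

A delta-gamma hedge neutralizes both ratios against small moves in the underlying; the vega hedge the paper tests additionally offsets sensitivity to volatility itself, which is where a term-structure model becomes necessary.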
Abstract:
The American stock market has evolved rapidly over the last decade. It has become an open architecture in which participants with innovative technology can compete effectively. Several regulatory changes and technological innovations have enabled profound changes in market structure. These changes, together with the technological development of high-speed networks, acted as a catalyst, giving rise to a new form of trading known as High-Frequency Trading (HFT). HFT firms emerged and took over, on a large scale, the market-making business of providing liquidity. Although HFT has grown massively, over the last four years it has lost significant profitability, as more firms joined the sector and margins were squeezed. Against this background, this thesis presents a brief review of HFT activity, followed by an analysis of the boundaries of this sector as well as of the characteristics of the HFT macro-environment. To this end, the thesis carried out an extensive review of the literature and of qualitative public documents such as newspapers, meeting minutes and official reports. The thesis employed a set of analytical frameworks - Barriers to Entry and Mobility (Porter, 1980), Models of Industry Evolution (McGahan, 2004), and the Structure of the Information-Intensive Sector (Sampler, 1998) - to analyze the boundaries of the HFT sector. In addition, it employed Models of Industry Evolution (McGahan, 2004) and PESTEL (Johnson, Scholes and Whittington, 2011) to analyze the sector and the context surrounding the HFT business. The analysis concluded that the firms that employ HFT to act and compete in the stock market constitute an independent sector.
Abstract:
My dissertation focuses on dynamic aspects of coordination processes such as the reversibility of early actions, the option to delay decisions, and learning about the environment from the observation of other people’s actions. This study proposes the use of tractable dynamic global games where players privately and passively learn about their actions’ true payoffs and are able to adjust early investment decisions to the arrival of new information, in order to investigate the consequences of the presence of liquidity shocks for the performance of a Tobin tax as a policy intended to foster coordination success (chapter 1), and the adequacy of the use of a Tobin tax to reduce an economy’s vulnerability to sudden stops (chapter 2). Then, it analyzes players’ incentive to acquire costly information in a sequential decision setting (chapter 3). In chapter 1, a continuum of foreign agents decide whether or not to enter an investment project. A fraction λ of them are hit by liquidity restrictions in a second period and are forced to withdraw early investment or are precluded from investing in the interim period, depending on the actions they chose in the first period. Players not affected by the liquidity shock are able to revise early decisions. Coordination success is increasing in aggregate investment and decreasing in the aggregate volume of capital exit. Without liquidity shocks, aggregate investment is (in a pivotal contingency) invariant to frictions like a tax on short-term capital. In this case, a Tobin tax always increases the incidence of success. In the presence of liquidity shocks, this invariance result no longer holds in equilibrium. A Tobin tax becomes harmful to aggregate investment, which may reduce the incidence of success if the economy does not benefit enough from avoiding capital reversals. It is shown that the Tobin tax that maximizes the ex-ante probability of successfully coordinated investment is decreasing in the liquidity shock.
Chapter 2 studies the effects of a Tobin tax in the same setting as the global game model proposed in chapter 1, with the exception that the liquidity shock is considered stochastic, i.e., there is also aggregate uncertainty about the extent of the liquidity restrictions. It identifies conditions under which, in the unique equilibrium of the model with a low probability of liquidity shocks but large dry-ups, a Tobin tax is welfare improving, helping agents to coordinate on the good outcome. The model provides a rationale for a Tobin tax on economies that are prone to sudden stops. The optimal Tobin tax tends to be larger when capital reversals are more harmful and when the fraction of agents hit by liquidity shocks is smaller. Chapter 3 focuses on information acquisition in a sequential decision game with payoff complementarity and information externality. When information is cheap relative to players’ incentive to coordinate actions, only the first player chooses to process information; the second player learns about the true payoff distribution from observing the first player’s decision and follows her action. Miscoordination requires that both players privately process information, which tends to happen when information is expensive and the prior knowledge about the distribution of payoffs has a large variance.
Abstract:
GUEDES, Clediane de Araújo; FARIAS, Gabriela Belmont de. Information literacy: uma análise nas bibliotecas escolares da rede privada em Natal/RN. Revista Digital de Biblioteconomia e Ciência da Informação, Campinas, v. 4, n. 2, p. 110-133, jan./jun. 2007.
Abstract:
VANTI, Nadia. Links hipertextuais na comunicação científica: análise webométrica dos sítios acadêmicos latino-americanos em Ciências Sociais. Porto Alegre, 2007. 292 f. Tese (Doutorado em Comunicação e Informação) – Universidade Federal do Rio Grande do Sul. Porto Alegre, 2007.
Abstract:
SOUZA, Rodrigo B.; MEDEIROS, Adelardo A. D.; NASCIMENTO, João Maria A.; GOMES, Heitor P.; MAITELLI, André L. A Proposal to the Supervision of Processes in an Industrial Environment with Heterogeneous Systems. In: INTERNATIONAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, 32., 2006, Paris. Anais... Paris: IECON, 2006.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The increasing competitiveness of the construction industry, set in an economic environment in which supply now exceeds demand, means that the prices of many products and services are strongly influenced by production processes and by the final consumer. Thus, to become more competitive in the market, construction companies are seeking new alternatives to reduce and control costs and production processes, and tools that allow close monitoring of the construction schedule, with the consequent compliance with the client's deadlines. Against this background, the creation of tools for control, service management and work planning emerges as an investment opportunity and an area that can bring great benefits to construction companies. The goal of this work is to present a system of planning, service management and cost control that, through worksheets, provides information relating to the production phase of the work, allowing the visualization of possible irregularities in the planning and cost of the enterprise and enabling the company to take steps to achieve the goals of the enterprise in question, correcting deviations when necessary. The developed system was used in a real estate project in Rio Grande do Norte, and the results showed that its use allowed the construction company to track its results and take corrective and preventive actions during the production process, efficiently and effectively.
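The kind of irregularity check such worksheets perform can be sketched as a planned-versus-actual cost comparison. This is a generic illustration with invented service names, figures, and a hypothetical 10% tolerance, not the system described in the work:

```python
def cost_variances(planned, actual, tolerance=0.10):
    """Flag services whose actual cost deviates from the planned
    cost by more than the given tolerance (as a fraction)."""
    flags = {}
    for service, plan in planned.items():
        real = actual.get(service, 0.0)
        deviation = (real - plan) / plan
        if abs(deviation) > tolerance:
            flags[service] = round(deviation, 2)
    return flags

planned = {"masonry": 1000.0, "plumbing": 500.0, "painting": 800.0}
actual = {"masonry": 1150.0, "plumbing": 510.0, "painting": 696.0}
print(cost_variances(planned, actual))  # {'masonry': 0.15, 'painting': -0.13}
```

Flagged items (here, a 15% overrun on masonry and a 13% underrun on painting) are exactly the irregularities that would trigger the corrective and preventive actions the abstract mentions.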