885 results for Context data


Relevance: 30.00%

Publisher:

Abstract:

The astrophysical context in which this thesis project lies is the mutual interaction between accretion onto a Super Massive Black Hole (SMBH) and the Star Formation (SF) that takes place in the host galaxy. This is one of the key topics of modern extragalactic astrophysical research: it is widely accepted that, to understand the physics of a galaxy, the contribution of a possible central AGN must be taken into account. The aim of this thesis is to study the physical processes of the nearby Seyfert galaxy NGC 34. This source was selected because of the wide collection of multiwavelength data available in the literature and because it has recently been observed with the Atacama Large Millimeter/submillimeter Array (ALMA) in Band 9. The project is divided into two main parts: first, we reduced and analyzed the ALMA data, obtaining the continuum and CO(6-5) maps; then, we looked for a coherent explanation of the physical characteristics of NGC 34. In particular, we focused on the physics of the interstellar medium (ISM), in order to understand its properties in terms of density, chemical composition and dominant radiation field (SF or accretion). This work was done by analyzing the distribution of several CO transitions as a function of the transition number (the CO Spectral Line Energy Distribution, or CO SLED), obtained by joining the CO(6-5) line with other transitions available in the literature. More precisely, the observed CO SLED was compared with ISM models, including Photo-Dissociation Regions (PDRs) and X-ray-Dominated Regions (XDRs), computed with the state-of-the-art photoionization code CLOUDY. Along with the observed CO SLED, we took into account other physical properties of NGC 34, such as the Star Formation Rate (SFR), the gas mass and the X-ray luminosity.
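As a rough illustration of the comparison step described above, the sketch below scales placeholder model CO SLEDs to placeholder observed line fluxes and ranks them by chi-square. All numbers are invented; they are neither NGC 34 measurements nor actual CLOUDY outputs.

```python
import numpy as np

# Hypothetical sketch: compare an observed CO SLED with model SLED shapes
# (e.g., PDR vs. XDR grids) by finding the best-fit scaling and chi-square.
j_up = np.array([1, 2, 3, 6])                 # observed transitions J_up (placeholder)
obs_flux = np.array([1.0, 3.5, 6.0, 9.0])     # placeholder line fluxes (arbitrary units)
obs_err = 0.1 * obs_flux                      # assumed 10% uncertainties

model_sleds = {                               # placeholder model SLED shapes
    "PDR": np.array([1.0, 3.0, 5.0, 6.5]),
    "XDR": np.array([1.0, 3.8, 6.5, 10.0]),
}

for name, model in model_sleds.items():
    # Weighted least-squares scaling of the model shape to the data.
    w = 1.0 / obs_err**2
    scale = np.sum(w * model * obs_flux) / np.sum(w * model**2)
    chi2 = np.sum(((obs_flux - scale * model) / obs_err) ** 2)
    print(f"{name}: scale={scale:.2f}, chi2={chi2:.2f}")
```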

Relevance: 30.00%

Publisher:

Abstract:

Hotel chains have access to a treasure trove of “big data” on individual hotels’ monthly electricity and water consumption. Benchmarked comparisons of hotels within a specific chain create the opportunity to cost-effectively improve the environmental performance of specific hotels. This paper describes a simple approach for using such data to jointly reduce operating expenditure and advance broad sustainability goals. In recent years, energy economists have used such “big data” to generate insights about the energy consumption of the residential, commercial, and industrial sectors. Lessons from these studies are directly applicable to the hotel sector. A hotel’s administrative data provide a “laboratory” for conducting randomized controlled trials to establish what works in enhancing hotel energy efficiency.
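To make the benchmarking idea concrete, here is a minimal sketch, with made-up figures, of how monthly consumption could be normalized per occupied room-night and used to flag hotels for an efficiency audit. It is an illustration under stated assumptions, not the paper's procedure.

```python
import numpy as np

# Illustrative within-chain benchmark: flag hotels whose energy intensity
# (kWh per occupied room-night) sits well above the chain average.
rng = np.random.default_rng(4)
n_hotels, n_months = 20, 12
kwh = rng.normal(120_000, 15_000, size=(n_hotels, n_months))      # monthly electricity use
room_nights = rng.normal(3_000, 300, size=(n_hotels, n_months))   # occupied room-nights

intensity = (kwh / room_nights).mean(axis=1)   # average kWh per room-night, per hotel
z = (intensity - intensity.mean()) / intensity.std()
flagged = np.flatnonzero(z > 1.5)              # candidates for an efficiency review
print("Hotels to review:", flagged, "z-scores:", np.round(z[flagged], 2))
```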

Relevance: 30.00%

Publisher:

Abstract:

Abstract: In the mid-1990s, when I worked for a telecommunications giant, I struggled to gain access to basic geodemographic data. It cost hundreds of thousands of dollars at the time to simply purchase a tile of satellite imagery from Marconi, and it was often cheaper to create my own maps using a digitizer and A0 paper maps. Everything from granular administrative boundaries to rights-of-way to points of interest and geocoding capabilities was either unavailable for the places I was working in throughout Asia or very limited. Control of this data lay either with a government’s census and statistical bureau or with a handful of forward-thinking corporations that created it. Twenty years on, we find ourselves inundated with data (location and other) that we are challenged to amalgamate, much of it still “dirty” in nature. Open data initiatives such as ODI give us great hope for how we might be able to share information and capitalize not only on crowdsourcing behavior but also on the implications for positive usage for the environment and for the advancement of humanity. We are already gathering and amassing a great deal of data and insight through excellent citizen science participatory projects across the globe. In early 2015, I delivered a keynote at the Data Made Me Do It conference at UC Berkeley, and in the preceding year an invited talk at the inaugural QSymposium. In gathering research for these presentations, I began to ponder the effect that social machines (in effect, autonomous data collection subjects and objects) might have on social behaviors. I focused on studying the problem of data from various veillance perspectives, with an emphasis on the shortcomings of uberveillance, which included the potential for misinformation, misinterpretation, and information manipulation when context was entirely missing. As we build advanced systems that rely almost entirely on social machines, we need to ponder the risks associated with following a purely technocratic approach where machines devoid of intelligence may one day dictate what humans do at the fundamental praxis level. What might be the fallout of uberveillance? Bio: Dr Katina Michael is a professor in the School of Computing and Information Technology at the University of Wollongong. She presently holds the position of Associate Dean – International in the Faculty of Engineering and Information Sciences. Katina is the editor-in-chief of IEEE Technology and Society Magazine and a senior editor of IEEE Consumer Electronics Magazine. Since 2008 she has been a board member of the Australian Privacy Foundation, and until recently was its Vice-Chair. Michael researches the socio-ethical implications of emerging technologies, with an emphasis on an all-hazards approach to national security. She has written and edited six books and guest-edited numerous special journal issues on themes related to radio-frequency identification (RFID) tags, supply chain management, location-based services, innovation and surveillance/uberveillance for Proceedings of the IEEE, Computer and IEEE Potentials. Prior to academia, Katina worked for Nortel Networks as a senior network engineer in Asia, and also in information systems for OTIS and Andersen Consulting. She holds cross-disciplinary qualifications in technology and law.

Relevance: 30.00%

Publisher:

Abstract:

This study discusses the importance of establishing trust in the post-acquisition integration context and how the use of e-channels facilitates or inhibits this process. The objective of the study is to analyse how the use of electronic communication channels influences the post-acquisition integration process in terms of trust establishment and overall integration efficiency, and to develop a framework as a result. Three sub-objectives are introduced: to find out the building blocks of trust in M&As, to analyse how the use of e-channels influences the process of trust establishment in the post-acquisition integration context, and to define the consequences that trust and the use of e-channels have for the process. The theoretical background of the study includes literature and theories relating to trust establishment in the post-acquisition integration context and to how the use of e-channels influences the process of trust development at a general level. The empirical research is conducted as a single case study based on key informant interviews. The interview data were collected between October 2015 and January 2016. Altogether nine interviews were conducted: six with representatives of the acquiring firm and three with members of the target firm. Thematic analysis was selected as the main method for analysing and processing the qualitative data. This study finds that trust has an essential role in the post-acquisition integration context, facilitating the integration process in various ways. Hence, identifying the different building blocks of trust is important so that members of the organisations are better able to establish and maintain trust. In today’s international business, the role of electronic communication channels has also increased significantly in importance, and it was confirmed that these channels pose both challenges and possibilities for the development of interpersonal trust. One of the most important underlying factors influencing trust levels via e-communication channels is the user’s level of comfort in using the different e-channels. Without sufficient and meaningful training, communication conducted via these channels is inhibited in a number of ways. Hence, understanding the defining characteristics of e-communication, together with the risks and opportunities related to their use, can have far-reaching consequences for the post-acquisition integration process as a whole. The framework, based on the findings and existing theory, introduces the most central factors influencing trust establishment, together with the positive and negative consequences these have for the integration process. Moreover, organisational-level consistency and the existence of shared guidelines on the appropriate selection of communication channels according to the nature of the task at hand are seen as important.

Relevance: 30.00%

Publisher:

Abstract:

The use of secondary data in health care research has become a very important issue over the past few years. Data from the treatment context are being used for the evaluation of medical data in external quality assurance, as well as to answer medical questions in the form of registers and research databases. Additionally, the establishment of electronic clinical systems such as data warehouses provides new opportunities for the secondary use of clinical data. Because health data are among the most sensitive information about an individual, the data must be safeguarded from disclosure.

Relevance: 30.00%

Publisher:

Abstract:

Over the past decade, Mental Health (MH) has increasingly appeared on the ‘school agenda’, both in terms of rising levels of MH difficulties in the student population and in terms of the expectation that schools have a role to play in supporting good MH. MH is a term fraught with ambiguities, leading to uncertainty around the most appropriate ways to provide support. A review of the current literature reveals a wide range of definitions and interpretations, sometimes within the same team of supporting professionals. The current study seeks to explore the perspectives held by two professional groups seemingly well placed to support young persons’ (YPs’) MH. Six Clinical Psychologists (CPs) and six Educational Psychologists (EPs) are interviewed, exploring their constructs of MH and their perceptions of their own role and the roles of others in supporting the MH of secondary-school-aged YPs. The data are analysed through Thematic Analysis. Findings suggest that there are variations between the two professions’ constructs of MH, and that EPs in particular have no unified concept of MH, likely owing to less experience or training in this area. CPs and EPs hold similar perceptions of the school’s role in promoting good MH and in flagging up concerns to more specialist professionals when necessary. However, there are discrepancies between the EP and CP perceptions of each other’s roles. The conflicting views appear to emerge through incomplete information about the other profession and through professional defensiveness in a context where resources and funding are scarce. The current study suggests that these challenges can be addressed through greater reflection on professional biases, exploration of MH constructs within other epistemological positions, and greater communication regarding professional roles, leading to clearer collaboration in supporting the MH of YPs.

Relevance: 30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 30.00%

Publisher:

Abstract:

This dissertation contains four essays that share a common purpose: developing new methodologies to exploit the potential of high-frequency data for the measurement, modeling and forecasting of financial asset volatility and correlations. The first two chapters provide useful tools for univariate applications while the last two chapters develop multivariate methodologies. In chapter 1, we introduce a new class of univariate volatility models named FloGARCH models. FloGARCH models provide a parsimonious joint model for low-frequency returns and realized measures, and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performance of the models in a realistic numerical study and on a data set composed of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains from the FloGARCH models in terms of in-sample fit, out-of-sample fit and forecasting accuracy compared with classical and Realized GARCH models. In chapter 2, using 12 years of high-frequency transactions for 55 U.S. stocks, we argue that combining low-frequency exogenous economic indicators with high-frequency financial data improves the ability of conditionally heteroskedastic models to forecast the volatility of returns, their full multi-step-ahead conditional distribution and the multi-period Value-at-Risk. Using a refined version of the Realized LGARCH model allowing for a time-varying intercept and implemented with realized kernels, we document that nominal corporate profits and term spreads have strong long-run predictive ability and generate accurate risk-measure forecasts over long horizons. The results are based on several loss functions and tests, including the Model Confidence Set. Chapter 3 is joint work with David Veredas. We study the class of disentangled realized estimators for the integrated covariance matrix of Brownian semimartingales with finite-activity jumps. These estimators separate correlations and volatilities. We analyze different combinations of quantile- and median-based realized volatilities, and four estimators of realized correlations with three synchronization schemes. Their finite-sample properties are studied under four data-generating processes, with and without microstructure noise, and under synchronous and asynchronous trading. The main finding is that the pre-averaged version of the disentangled estimators based on Gaussian ranks (for the correlations) and median deviations (for the volatilities) provides a precise, computationally efficient and easy alternative for measuring integrated covariances on the basis of noisy and asynchronous prices. Along these lines, a minimum-variance portfolio application shows the superiority of this disentangled realized estimator in terms of numerous performance metrics. Chapter 4 is co-authored with Niels S. Hansen, Asger Lunde and Kasper V. Olesen, all affiliated with CREATES at Aarhus University. We propose to use the Realized Beta GARCH model to exploit the potential of high-frequency data in commodity markets. The model produces high-quality forecasts of pairwise correlations between commodities, which can be used to construct a composite covariance matrix. We evaluate the quality of this matrix in a portfolio context and compare it to models used in the industry. We demonstrate significant economic gains in a realistic setting including short-selling constraints and transaction costs.
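As a minimal illustration of the kind of inputs these models use, the sketch below computes a daily realized variance from simulated intraday returns and runs a naive GARCH-type recursion driven by that realized measure. The parameters are arbitrary placeholders; this is in the spirit of Realized GARCH-style models but is not the FloGARCH estimator itself.

```python
import numpy as np

# Simulated intraday returns: ~78 five-minute bars per trading day.
rng = np.random.default_rng(0)
n_days, n_intraday = 250, 78
r = rng.normal(0.0, 0.001, size=(n_days, n_intraday))   # placeholder intraday log returns

realized_var = np.sum(r**2, axis=1)   # RV_t = sum of squared intraday returns
daily_ret = np.sum(r, axis=1)         # corresponding low-frequency (daily) return

# Naive volatility recursion driven by the realized measure instead of squared
# daily returns; parameters are illustrative, not estimated.
omega, beta, gamma = 1e-6, 0.7, 0.25
h = np.empty(n_days)
h[0] = realized_var.mean()
for t in range(1, n_days):
    h[t] = omega + beta * h[t - 1] + gamma * realized_var[t - 1]
print("last conditional variance:", h[-1])
```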

Relevance: 30.00%

Publisher:

Abstract:

This paper considers identification of treatment effects when the outcome variables and covariates are not observed in the same data sets. Ecological inference models, where aggregate outcome information is combined with individual demographic information, are a common example of these situations. In this context, the counterfactual distributions and the treatment effects are not point identified. However, recent results provide bounds to partially identify causal effects. Unlike previous works, this paper adopts the selection-on-unobservables assumption, which means that randomization of treatment assignments is not achieved until time-fixed unobserved heterogeneity is controlled for. Panel data models linear in the unobserved components are considered to achieve identification. To assess the performance of these bounds, this paper provides a simulation exercise.
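To illustrate what partial identification means in practice, the sketch below computes classic worst-case (Manski-type) bounds for the average treatment effect of a binary outcome. It is a generic textbook example on simulated data, not the paper's panel-data bounds.

```python
import numpy as np

# Worst-case bounds on the ATE of a binary outcome when counterfactuals are
# unobserved: the missing means are bounded by the outcome support [0, 1].
rng = np.random.default_rng(1)
n = 10_000
d = rng.binomial(1, 0.4, n)                        # treatment indicator
y = rng.binomial(1, np.where(d == 1, 0.6, 0.45))   # observed binary outcome

p1, p0 = d.mean(), 1 - d.mean()
ey1_obs = y[d == 1].mean()                         # E[Y | D = 1]
ey0_obs = y[d == 0].mean()                         # E[Y | D = 0]

ate_lower = (ey1_obs * p1 + 0 * p0) - (ey0_obs * p0 + 1 * p1)
ate_upper = (ey1_obs * p1 + 1 * p0) - (ey0_obs * p0 + 0 * p1)
print(f"ATE is partially identified in [{ate_lower:.3f}, {ate_upper:.3f}]")
```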

Relevance: 30.00%

Publisher:

Abstract:

Abstract: The ultimate problem considered in this thesis is modeling a high-dimensional joint distribution over a set of discrete variables. For this purpose, we consider classes of context-specific graphical models and the main emphasis is on learning the structure of such models from data. Traditional graphical models compactly represent a joint distribution through a factorization justified by statements of conditional independence which are encoded by a graph structure. Context-specific independence is a natural generalization of conditional independence that only holds in a certain context, specified by the conditioning variables. We introduce context-specific generalizations of both Bayesian networks and Markov networks by including statements of context-specific independence which can be encoded as a part of the model structures. For the purpose of learning context-specific model structures from data, we derive score functions, based on results from Bayesian statistics, by which the plausibility of a structure is assessed. To identify high-scoring structures, we construct stochastic and deterministic search algorithms designed to exploit the structural decomposition of our score functions. Numerical experiments on synthetic and real-world data show that the increased flexibility of context-specific structures can more accurately emulate the dependence structure among the variables and thereby improve the predictive accuracy of the models.
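The following sketch illustrates the notion of context-specific independence on simulated data: the response Y is independent of X given Z = 0 but depends on X given Z = 1. It is an illustration of the concept only, not the thesis's score-based structure learner.

```python
import numpy as np

# Simulate discrete variables where X matters for Y only in the context Z = 1.
rng = np.random.default_rng(2)
n = 100_000
z = rng.binomial(1, 0.5, n)
x = rng.binomial(1, 0.5, n)
p_y = np.where(z == 0, 0.3, np.where(x == 1, 0.8, 0.2))  # context-specific effect of X
y = rng.binomial(1, p_y)

# Empirical check: P(Y=1 | X, Z) barely moves with X when Z = 0, but shifts when Z = 1.
for z_val in (0, 1):
    m = z == z_val
    py_x0 = y[m & (x == 0)].mean()
    py_x1 = y[m & (x == 1)].mean()
    print(f"Z={z_val}: P(Y=1|X=0)={py_x0:.2f}, P(Y=1|X=1)={py_x1:.2f}")
```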

Relevance: 30.00%

Publisher:

Abstract:

Incidental findings on low-dose CT images obtained during hybrid imaging are an increasing phenomenon as CT technology advances. Understanding the diagnostic value of incidental findings along with the technical limitations is important when reporting image results and recommending follow-up, which may result in an additional radiation dose from further diagnostic imaging and an increase in patient anxiety. This study assessed lesions incidentally detected on CT images acquired for attenuation correction on two SPECT/CT systems. Methods: An anthropomorphic chest phantom containing simulated lesions of varying size and density was imaged on an Infinia Hawkeye 4 and a Symbia T6 using the low-dose CT settings applied for attenuation correction acquisitions in myocardial perfusion imaging. Twenty-two interpreters assessed 46 images from each SPECT/CT system (15 normal images and 31 abnormal images; 41 lesions). Data were evaluated using a jackknife alternative free-response receiver-operating-characteristic analysis (JAFROC). Results: JAFROC analysis showed a significant difference (P < 0.0001) in lesion detection, with the figures of merit being 0.599 (95% confidence interval, 0.568, 0.631) and 0.810 (95% confidence interval, 0.781, 0.839) for the Infinia Hawkeye 4 and Symbia T6, respectively. Lesion detection on the Infinia Hawkeye 4 was generally limited to larger, higher-density lesions. The Symbia T6 allowed improved detection rates for midsized lesions and some lower-density lesions. However, interpreters struggled to detect small (5 mm) lesions on both image sets, irrespective of density. Conclusion: Lesion detection is more reliable on low-dose CT images from the Symbia T6 than from the Infinia Hawkeye 4. This phantom-based study gives an indication of potential lesion detection in the clinical context as shown by two commonly used SPECT/CT systems, which may assist the clinician in determining whether further diagnostic imaging is justified.

Relevance: 30.00%

Publisher:

Abstract:

The study of forest fire activity, in its several aspects, is essential to understand the phenomenon and to prevent environmental public catastrophes. In this context, the analysis of the monthly number of fires over several years is one aspect to take into account in order to better comprehend this topic. The goal of this work is to analyze the monthly number of forest fires in the neighboring districts of Aveiro and Coimbra, Portugal, through dynamic factor models for bivariate count series. We use a Bayesian approach, through MCMC methods, to estimate the model parameters as well as the latent factor common to both series.
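A minimal sketch of the model structure described here: two monthly count series driven by a common latent AR(1) factor through a log link. All parameter values are invented and no MCMC estimation is performed; this only shows the data-generating structure such a model assumes.

```python
import numpy as np

# Simulate a bivariate count series sharing a common latent AR(1) factor.
rng = np.random.default_rng(3)
T = 240                                    # 20 years of monthly observations
phi, sigma = 0.8, 0.3                      # latent AR(1) persistence and innovation sd
alpha = np.array([2.0, 1.7])               # series-specific intercepts (log scale)
loadings = np.array([1.0, 0.8])            # loadings on the common factor

f = np.zeros(T)
for t in range(1, T):
    f[t] = phi * f[t - 1] + rng.normal(0.0, sigma)

# Poisson counts for the two districts, linked to the shared factor f_t.
rates = np.exp(alpha[None, :] + np.outer(f, loadings))
counts = rng.poisson(rates)                # shape (T, 2): monthly counts per district
print(counts[:5])
```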

Relevance: 30.00%

Publisher:

Abstract:

The concepts of social entrepreneurship and social enterprise have been extensively discussed over the previous two decades, yet the topic has still not matured. Most of the available literature is focused on defining these terms. Similarly, only a limited number of authors have discussed the marketing function of these enterprises or how marketing is interpreted in social entrepreneurship models. However, there is a plethora of literature on marketing entailing many different theories, among which the newest is the “market orientation” concept. Market orientation is a mix of customer orientation, competitor orientation and inter-functional coordination, suggesting that marketing is part of the business philosophy. This study focuses on the marketing and market orientation of social enterprises while giving an overview of the literature on marketing and market orientation in social enterprises. The study addresses two basic questions: 1) what does the literature say about the marketing and market orientation of social enterprises, including social enterprises in Pakistan, and 2) how are these concepts interpreted by social enterprises in the Pakistani market? Key features of the research methodology include a case study approach and thematic analysis using thematic networks. The results indicate that only a limited number of authors have discussed the market orientation concept in the context of social enterprises. The interview data indicate that the case firm uses marketing unconsciously, without a dedicated marketing department. In addition, it was found that in the social enterprise world competition is tackled through a win-win approach, on the view that the many enterprises working for society improve that society, which is the basic mission of any social enterprise. The data also showed that in the Pakistani market the social enterprise concept is not yet used in a legal sense, which allows more room for innovation. This study intends to give theorists a new perspective on using the market orientation concept in social enterprises, and to encourage managers to adopt marketing as their business philosophy in order to satisfy stakeholders, deliver their businesses better and contribute to social good.

Relevance: 30.00%

Publisher:

Abstract:

We present new methodologies to generate rational function approximations of broadband electromagnetic responses of linear and passive networks of high-speed interconnects, and to construct SPICE-compatible, equivalent circuit representations of the generated rational functions. These new methodologies are driven by the desire to improve the computational efficiency of the rational function fitting process, and to ensure enhanced accuracy of the generated rational function interpolation and its equivalent circuit representation. Toward this goal, we propose two new methodologies for rational function approximation of high-speed interconnect network responses. The first one relies on the use of both time-domain and frequency-domain data, obtained either through measurement or numerical simulation, to generate a rational function representation that extrapolates the input, early-time transient response data to late-time response while at the same time providing a means to both interpolate and extrapolate the used frequency-domain data. The aforementioned hybrid methodology can be considered as a generalization of the frequency-domain rational function fitting utilizing frequency-domain response data only, and the time-domain rational function fitting utilizing transient response data only. In this context, a guideline is proposed for estimating the order of the rational function approximation from transient data. The availability of such an estimate expedites the time-domain rational function fitting process. The second approach relies on the extraction of the delay associated with causal electromagnetic responses of interconnect systems to provide for a more stable rational function process utilizing a lower-order rational function interpolation. A distinctive feature of the proposed methodology is its utilization of scattering parameters. For both methodologies, the approach of fitting the electromagnetic network matrix one element at a time is applied. It is shown that, with regard to the computational cost of the rational function fitting process, such an element-by-element rational function fitting is more advantageous than full matrix fitting for systems with a large number of ports. Despite the disadvantage that different sets of poles are used in the rational function of different elements in the network matrix, such an approach provides for improved accuracy in the fitting of network matrices of systems characterized by both strongly coupled and weakly coupled ports. Finally, in order to provide a means for enforcing passivity in the adopted element-by-element rational function fitting approach, the methodology for passivity enforcement via quadratic programming is modified appropriately for this purpose and demonstrated in the context of element-by-element rational function fitting of the admittance matrix of an electromagnetic multiport.
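As a simplified illustration of rational function fitting of frequency-response data, the sketch below solves only the linear least-squares step for the residues of a pole-residue model, with the poles assumed known and fixed, on synthetic data. The dissertation's actual methodologies (hybrid time/frequency fitting, pole relocation, delay extraction, element-by-element fitting, and passivity enforcement via quadratic programming) are not implemented here.

```python
import numpy as np

# Fit H(s) ~ d + sum_k r_k / (s - p_k) to sampled frequency-response data by
# linear least squares, given a fixed set of stable poles.
freqs = np.linspace(1e6, 1e9, 400)            # sample frequencies in Hz
s = 2j * np.pi * freqs

# Synthetic "measured" response generated from a known pole-residue model.
true_poles = np.array([-1e8 + 5e8j, -1e8 - 5e8j, -3e8 + 0j])
true_res = np.array([2e8 + 1e8j, 2e8 - 1e8j, 5e8 + 0j])
H = 0.5 + (true_res[None, :] / (s[:, None] - true_poles[None, :])).sum(axis=1)

# Candidate poles (assumed known for this sketch; vector-fitting-style methods
# would relocate them iteratively).
poles = true_poles
A = np.hstack([1.0 / (s[:, None] - poles[None, :]), np.ones((s.size, 1))])
coef, *_ = np.linalg.lstsq(A, H, rcond=None)
residues, d = coef[:-1], coef[-1].real

H_fit = d + (residues[None, :] / (s[:, None] - poles[None, :])).sum(axis=1)
print("max abs fitting error:", np.abs(H - H_fit).max())
```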