903 results for Data-driven Methods
Abstract:
In this thesis we develop a new generative model of social networks belonging to the family of time-varying networks. Correctly modelling the mechanisms that shape the growth of a network and the dynamics of edge activation and inactivation is of central importance in network science. Indeed, by means of generative models that mimic the real-world dynamics of contacts in social networks it is possible to forecast the outcome of an epidemic process, optimize immunization campaigns, or spread information among individuals optimally. This task can now be tackled by taking advantage of the recent availability of large-scale, high-quality, time-resolved datasets. This wealth of digital data has made it possible to deepen our understanding of the structure and properties of many real-world networks. Moreover, the empirical evidence of a temporal dimension in networks has prompted a switch of paradigm from a static representation of graphs to a time-varying one. In this work we exploit the activity-driven paradigm (a modelling tool belonging to the family of time-varying networks) to develop a general dynamical model that encodes two fundamental mechanisms shaping the topology and temporal structure of social networks: social capital allocation and burstiness. The former accounts for the fact that individuals do not invest their time and social interactions at random, but rather allocate them toward already known nodes of the network. The latter accounts for the heavy-tailed distributions of inter-event times in social networks. We then empirically measure the properties of these two mechanisms in seven real-world datasets, develop a data-driven model, and solve it analytically. We check the results against numerical simulations and test our predictions on real-world datasets, finding good agreement between the two. Moreover, we find and characterize a non-trivial interplay between burstiness and social capital allocation in the parameter phase space. Finally, we present a novel approach to the development of a complete generative model of time-varying networks. This model is inspired by Kauffman's adjacent-possible theory and is based on a generalized version of Pólya's urn. Remarkably, most of the complex and heterogeneous features of real-world social networks are naturally reproduced by this dynamical model, together with many higher-order topological properties (clustering coefficient, community structure, etc.).
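To make the two mechanisms concrete, the sketch below (in Python) shows one activation step of an activity-driven network with a simple memory rule standing in for social capital allocation, and heavy-tailed gaps standing in for burstiness. It is an illustrative reading of the abstract, not the thesis code: the reinforcement form k/(k+c), the constant c, the number of links m, and the activity and waiting-time distributions are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

N = 1000                                        # number of nodes (illustrative)
activity = np.clip((rng.pareto(2.5, N) + 1.0) / 10.0, 0.0, 1.0)  # heterogeneous activation probabilities (assumed form)
c = 2.0                                         # memory constant in the reinforcement rule (assumed)
m = 3                                           # links created per activation (assumed)

neighbours = [set() for _ in range(N)]          # already-contacted nodes, i.e. accumulated "social capital"

def activation_step():
    """One snapshot of the activity-driven network with memory."""
    edges = []
    active = np.where(rng.random(N) < activity)[0]
    for i in active:
        for _ in range(m):
            k = len(neighbours[i])
            # With probability k/(k+c) re-contact an already known node (social capital
            # allocation); otherwise explore a node chosen uniformly at random.
            if k > 0 and rng.random() < k / (k + c):
                j = int(rng.choice(list(neighbours[i])))
            else:
                j = int(rng.integers(N))
                while j == i:
                    j = int(rng.integers(N))
            neighbours[i].add(j)
            neighbours[j].add(int(i))
            edges.append((int(i), j))
    return edges

# Burstiness: heavy-tailed gaps between successive activation snapshots (assumed Pareto form).
gaps = rng.pareto(1.5, 50) + 1.0
snapshots = [activation_step() for _ in gaps]
print(f"{len(snapshots)} snapshots, {sum(len(e) for e in snapshots)} temporal edges")
```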
Abstract:
The point of departure for this study was a recognition of the differences in suppliers' and acquirers' judgements of the value of technology when transferred between the two, and the significant impacts of technology valuation on the establishment of technology partnerships and the effectiveness of technology collaborations. The perceptions, transfer strategies and objectives, perceived benefits and assessed technology contributions, as well as associated costs and risks of both suppliers and acquirers, were seen to be at the core of these differences. This study hypothesised that the capability embodied in technology to yield future returns makes technology valuation distinct from the process of valuing manufacturing products. The study has hence gone beyond the dimensions of cost calculation and price determination that have been discussed in the existing literature, by taking a broader view of how to achieve and share future added value from transferred technology. The core of technology valuation was argued to be the evaluation of the 'quality' of the capability (technology) in generating future value and the effectiveness of the transfer arrangement for the best use of such a capability. A dynamic approach comprising future value generation and realisation within the context of specific forms of collaboration was therefore adopted. The research investigations focused on the UK and China machine tool industries, where there are many technology transfer activities and the value issue has already been recognised in practice. Data were gathered from three groups: machine tool manufacturing technology suppliers in the UK, acquirers in China, and machine tool users in China. Data collection methods included questionnaire surveys and case studies within all three groups. The study focused on identifying and examining the major factors affecting value, as well as their interactive effects on technology valuation, from both the supplier's and the acquirer's points of view. The survey results showed the perceptions and assessments of the owner's value and the transfer value from the supplier's and the acquirer's points of view respectively. Benefits, costs and risks related to the technology transfer were the major factors affecting the value of technology. The impacts of transfer payment on the value of technology, through the sharing of financial benefits, costs and risks between partners, were assessed. A close relationship between technology valuation and transfer arrangements was established, whereby technical requirements and strategic implications were considered. The case studies reflected the research propositions and revealed that benefits, costs and risks in the financial, technical and strategic dimensions interacted in the process of technology valuation within the context of technology collaboration. Further to the assessment of factors affecting value, a technology valuation framework was developed which suggests that technology attributes for the enhancement of contributory factors, and their contributions to the realisation of transfer objectives, need to be measured and compared with the associated costs and risks. The study concluded that technology valuation is a dynamic process including the generation and sharing of future value and the interactions between financial, technical and strategic achievements.
Abstract:
A graphical process control language has been developed as a means of defining process control software. The user configures a block diagram describing the required control system, from a menu of functional blocks, using a graphics software system with a graphics terminal. Additions may be made to the menu of functional blocks, to extend the system capability, and a group of blocks may be defined as a composite block. This latter feature provides for segmentation of the overall system diagram and the repeated use of the same group of blocks within the system. The completed diagram is analysed by a graphics compiler which generates the programs and data structure to realise the run-time software. The run-time software has been designed as a data-driven system which allows for modifications at the run-time level in both parameters and system configuration. Data structures have been specified to ensure efficient execution and minimal storage requirements in the final control software. Machine independence has been accommodated as far as possible by using CORAL 66 as the high-level language throughout the entire system, with the final run-time code generated by a CORAL 66 compiler appropriate to the target processor.
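The data-driven run-time idea can be illustrated with a short sketch: the block diagram is held as plain data and interpreted on each scan, so both parameters and system configuration can be changed at run time without regenerating code. The sketch below is in Python rather than CORAL 66, and the block types, signal names and wiring are invented for the example.

```python
# Minimal sketch of a data-driven run-time for a block diagram (illustrative only,
# not the original CORAL 66 implementation).
from typing import Callable, Dict, List

BLOCK_LIBRARY: Dict[str, Callable[..., float]] = {
    "SUM":   lambda a, b: a + b,
    "GAIN":  lambda x, k=1.0: k * x,
    "LIMIT": lambda x, lo=0.0, hi=100.0: max(lo, min(hi, x)),
}

# The system "diagram" is plain data: each entry names a block type, its input
# signals, its output signal and any parameters.  Because it is data, it can be
# edited at run time without recompiling anything.
diagram: List[dict] = [
    {"type": "SUM",   "inputs": ["setpoint", "measurement"], "output": "error"},
    {"type": "GAIN",  "inputs": ["error"], "output": "drive", "params": {"k": 2.5}},
    {"type": "LIMIT", "inputs": ["drive"], "output": "valve", "params": {"lo": 0.0, "hi": 10.0}},
]

def scan(signals: Dict[str, float]) -> Dict[str, float]:
    """One scan of the diagram: evaluate every block in the listed order."""
    for block in diagram:
        fn = BLOCK_LIBRARY[block["type"]]
        args = [signals[name] for name in block["inputs"]]
        signals[block["output"]] = fn(*args, **block.get("params", {}))
    return signals

print(scan({"setpoint": 5.0, "measurement": -1.2}))
```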
Abstract:
This book is aimed primarily at microbiologists who are undertaking research and who require a basic knowledge of statistics to analyse their experimental data. Computer software employing a wide range of data analysis methods is widely available to experimental scientists. The availability of this software, however, makes it essential that investigators understand the basic principles of statistics. Statistical analysis of data can be complex, with many different methods of approach, each of which applies in a particular experimental circumstance. Hence, it is possible to apply an incorrect statistical method to data and to draw the wrong conclusions from an experiment. The purpose of this book, which has its origin in a series of articles published in the Society for Applied Microbiology journal ‘The Microbiologist’, is to present the basic logic of statistics as clearly as possible and thereby to dispel some of the myths that often surround the subject. The 28 ‘Statnotes’ deal with various topics that are likely to be encountered, including the nature of variables, the comparison of means of two or more groups, non-parametric statistics, analysis of variance, correlating variables, and more complex methods such as multiple linear regression and principal components analysis. In each case, the relevant statistical method is illustrated with examples drawn from experiments in microbiological research. The text incorporates a glossary of the most commonly used statistical terms and there are two appendices designed to aid the investigator in the selection of the most appropriate test.
Abstract:
Excessive consumption of dietary fat is acknowledged to be a widespread problem linked to a range of medical conditions. Despite this, little is known about the specific sensory appeal held by fats and no previous published research exists concerning human perception of non-textural taste qualities in fats. This research aimed to address whether a taste component can be found in sensory perception of pure fats. It also examined whether individual differences existed in human taste responses to fat, using both aggregated data analysis methods and multidimensional scaling. Results indicated that individuals were able both to detect the primary taste qualities of sweet, salty, sour and bitter in pure processed oils and to reliably ascribe their own individually generated taste labels, suggesting that a taste component may be present in human responses to fat. Individual variation appeared to exist, both in the perception of given taste qualities and in perceived intensity and preferences. A number of factors were examined in relation to such individual differences in taste perception, including age, gender, genetic sensitivity to 6-n-propylthiouracil, body mass, dietary preferences and intake, dieting behaviours and restraint. Results revealed that, to varying extents, gender, age, sensitivity to 6-n-propylthiouracil, dietary preferences, habitual dietary intake and restraint all appeared to be related to individual variation in taste responses to fat. However, in general, these differences appeared to exist in the form of differing preferences and levels of intensity with which taste qualities detected in fat were perceived, as opposed to the perception of specific taste qualities being associated with given traits or states. Equally, each of these factors appeared to exert only a limited influence upon variation in sensory responses and thus the potential for using taste responses to fats as a marker for issues such as over-consumption, obesity or eating disorders is at present limited.
Abstract:
In analysing manufacturing systems, for either design or operational reasons, failure to account for the potentially significant dynamics could produce invalid results. There are many analysis techniques that can be used; however, simulation is unique in its ability to assess detailed, dynamic behaviour. The use of simulation to analyse manufacturing systems would therefore seem appropriate, if not essential. Many simulation software products are available, but their ease of use and scope of application vary greatly. This is illustrated at one extreme by simulators, which offer rapid but limited application, and at the other by simulation languages, which are extremely flexible but tedious to code. Given that a typical manufacturing engineer does not possess in-depth programming and simulation skills, the use of simulators over simulation languages would seem the more appropriate choice. Whilst simulators offer ease of use, their limited functionality may preclude their use in many applications. The construction of current simulators makes it difficult to amend or extend the functionality of the system to meet new challenges. Some simulators could even become obsolete as users demand modelling functionality that reflects the latest manufacturing system design and operation concepts. This thesis examines the deficiencies in current simulation tools and considers whether they can be overcome by the application of object-oriented principles. Object-oriented techniques have gained in popularity in recent years and are seen as having the potential to overcome many of the problems traditionally associated with software construction. There are a number of key concepts that are exploited in the work described in this thesis: the use of object-oriented techniques to act as a framework for abstracting engineering concepts into a simulation tool, and the ability to reuse and extend object-oriented software. It is argued that current object-oriented simulation tools are deficient and that, in designing such tools, object-oriented techniques should be used not just for the creation of individual simulation objects but for the creation of the complete software. This results in the ability to construct an easy-to-use simulator that is not limited by its initial functionality. The thesis presents the design of an object-oriented, data-driven simulator which can be freely extended. Discussion and work is focused on discrete parts manufacture. The system developed retains the ease of use typical of data-driven simulators, whilst removing any limitation on its potential range of applications. Reference is given to additions made to the simulator by other developers not involved in the original software development. Particular emphasis is put on the requirements of the manufacturing engineer and the need for the engineer to carry out dynamic evaluations.
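The central design idea, an object-oriented and data-driven core that later developers extend by subclassing rather than by modifying the engine, can be illustrated with a minimal sketch. The following Python fragment is not the thesis software; the engine, the base Machine class and the UnreliableMachine extension are invented solely to show how new behaviour is added without touching the core.

```python
import heapq
import random

class Engine:
    """Tiny discrete-event core: a time-ordered queue of (time, seq, handler, args)."""
    def __init__(self):
        self.now, self._queue, self._seq = 0.0, [], 0

    def schedule(self, delay, handler, *args):
        self._seq += 1
        heapq.heappush(self._queue, (self.now + delay, self._seq, handler, args))

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, handler, args = heapq.heappop(self._queue)
            handler(*args)

class Machine:
    """Base machine: a fixed cycle time, counting the parts it completes."""
    def __init__(self, engine, name, cycle_time):
        self.engine, self.name, self.cycle_time, self.parts = engine, name, cycle_time, 0
        engine.schedule(cycle_time, self.finish_part)

    def finish_part(self):
        self.parts += 1
        self.engine.schedule(self.cycle_time, self.finish_part)

class UnreliableMachine(Machine):
    """An extension added without touching the core: produces only when it is 'up'."""
    def __init__(self, engine, name, cycle_time, uptime=0.8):
        self.uptime = uptime
        super().__init__(engine, name, cycle_time)

    def finish_part(self):
        if random.random() < self.uptime:
            self.parts += 1
        self.engine.schedule(self.cycle_time, self.finish_part)

engine = Engine()
cells = [Machine(engine, "lathe", 5.0), UnreliableMachine(engine, "mill", 7.0)]
engine.run(until=480.0)     # one simulated eight-hour shift, in minutes (illustrative)
print({machine.name: machine.parts for machine in cells})
```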
Abstract:
Many tests of financial contagion require a definition of the dates separating calm from crisis periods. We propose to use a battery of break search procedures for individual time series to objectively identify potential break dates in relationships between countries. Applied to the biggest European stock markets and combined with two well established tests for financial contagion, this approach results in break dates which correctly identify the timing of changes in cross-country transmission mechanisms. Application of break search procedures breathes new life into the established contagion tests, allowing for an objective, data-driven timing of crisis periods.
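One widely used data-driven device for locating a single break, least-squares segmentation of a series' mean, is sketched below for illustration. It is a generic textbook procedure, not the specific battery of break-search tests used in the paper, and the series it is applied to is simulated.

```python
# Least-squares search for a single break in the mean of a series: the break date is
# the split point that minimises the total within-segment sum of squares.
import numpy as np

def best_break(y, min_seg=20):
    """Return the index that minimises the within-segment sum of squares."""
    best_k, best_ssr = None, np.inf
    for k in range(min_seg, len(y) - min_seg):
        ssr = ((y[:k] - y[:k].mean()) ** 2).sum() + ((y[k:] - y[k:].mean()) ** 2).sum()
        if ssr < best_ssr:
            best_k, best_ssr = k, ssr
    return best_k, best_ssr

rng = np.random.default_rng(0)
# Simulated cross-market linkage proxy with a shift at t = 300 (purely illustrative data).
y = np.concatenate([rng.normal(0.2, 0.1, 300), rng.normal(0.6, 0.1, 200)])
k, _ = best_break(y)
print("estimated break date index:", k)   # should land near 300
```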
Abstract:
This paper demonstrates that the conventional approach of using official liberalisation dates as the only existing breakdates could lead to inaccurate conclusions as to the effect of the underlying liberalisation policies. It also proposes an alternative paradigm for obtaining more robust estimates of volatility changes around official liberalisation dates and/or other important market events. By focusing on five East Asian emerging markets, all of which liberalised their financial markets, and by using recent advances in the econometrics of structural change, it shows that (i) the detected breakdates in the volatility of stock market returns can be dramatically different from official liberalisation dates and (ii) the use of official liberalisation dates as breakdates can readily entail inaccurate inference. In contrast, the use of data-driven techniques for the detection of multiple structural changes leads to a richer and inevitably more accurate pattern of volatility evolution than focusing on official liberalisation dates alone.
Abstract:
This paper investigates whether the non-normality typically observed in daily stock-market returns could arise because of the joint existence of breaks and GARCH effects. It proposes a data-driven procedure to credibly identify the number and timing of breaks and applies it to the benchmark stock-market indices of 27 OECD countries. The findings suggest that a substantial element of the observed deviations from normality might indeed be due to the co-existence of breaks and GARCH effects. However, the presence of structural changes, rather than the GARCH effects, is found to be the primary reason for the non-normality. Also, there is still some remaining excess kurtosis that is unlikely to be linked to the specification of the conditional volatility or the presence of breaks. Finally, an interesting sideline result implies that GARCH models have limited capacity in forecasting stock-market volatility.
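The mechanism the paper points to can be reproduced in a few lines: superimposing a single variance break on a simulated GARCH(1,1) series inflates the unconditional excess kurtosis well beyond what the GARCH dynamics alone generate. The parameter values below are invented for the illustration and are not estimates from the paper.

```python
# A GARCH(1,1) return series with a single variance break shows much heavier
# unconditional tails than the same GARCH process without the break.
import numpy as np
from scipy.stats import kurtosis

def simulate_garch(n, omega=0.05, alpha=0.05, beta=0.90, seed=1):
    rng = np.random.default_rng(seed)
    r, h = np.empty(n), omega / (1 - alpha - beta)   # start at the unconditional variance
    for t in range(n):
        r[t] = np.sqrt(h) * rng.standard_normal()
        h = omega + alpha * r[t] ** 2 + beta * h
    return r

r = simulate_garch(4000)
r_break = r.copy()
r_break[2000:] *= 3.0          # structural break: volatility triples mid-sample

print("excess kurtosis, GARCH only :", round(kurtosis(r), 2))
print("excess kurtosis, GARCH+break:", round(kurtosis(r_break), 2))
```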
Abstract:
Failure to detect or account for structural changes in economic modelling can lead to misleading policy inferences, which can be perilous, especially for the more fragile economies of developing countries. Using three potential monetary policy instruments (Money Base, M0, and Reserve Money) for 13 member-states of the CFA Franc zone over the period 1989:11-2002:09, we investigate the magnitude of information extracted by employing data-driven techniques when analyzing breaks in time-series, rather than the simplifying practice of imposing policy implementation dates as break dates. The paper also tests Granger's (1980) aggregation theory and highlights some policy implications of the results.
Abstract:
This article focuses on the deviations from normality of stock returns before and after a financial liberalisation reform, and shows the extent to which inference based on statistical measures of stock market efficiency can be affected by not controlling for breaks. Drawing from recent advances in the econometrics of structural change, it compares the distribution of the returns of five East Asian emerging markets when breaks in the mean and variance are either (i) imposed using certain official liberalisation dates or (ii) detected non-parametrically using a data-driven procedure. The results suggest that measuring deviations from normality of stock returns with no provision for potentially existing breaks incorporates substantial bias. This is likely to severely affect any inference based on the corresponding descriptive or test statistics.
Abstract:
This paper investigates the environmental sustainability and competitiveness perceptions of small farmers in a region in northern Brazil. The main data collection instruments included a survey questionnaire and an analysis of the region's strategic plan. In total, ninety-nine goat and sheep breeding farmers were surveyed. Data analysis methods included descriptive statistics, cluster analysis, and chi-squared tests. The main results relate to the impact of education, land size, and location on the farmers' perceptions of competitiveness and environmental issues. Farmers with longer periods of education have higher perception scores about business competitiveness and environmental sustainability than those with less formal education. Farmers who are working larger land areas also have higher scores than those with smaller farms. Lastly, location can yield factors that impact on farmers' perceptions. In our study, farmers located in Angicos and Lajes had higher perception scores than Pedro Avelino and Afonso Bezerra, despite the geographical proximity of these municipalities. On the other hand, three other profile variables did not impact on farmers' perceptions, namely: family income, dairy production volume, and associative condition. The authors believe the results and insights can be extended to livestock farming in other developing countries and contribute generally to fostering effective sustainable development policies, mainly in the agribusiness sector. © 2013 Elsevier Ltd. All rights reserved.
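The kind of analysis described, clustering respondents on perception scores and then testing the association between cluster membership and a profile variable with a chi-squared test, can be sketched as follows. The data are randomly generated stand-ins for the survey responses, and the variable names are illustrative.

```python
# Cluster respondents on two perception scores, then cross-tabulate cluster membership
# against an education indicator and run a chi-squared test of association.
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
n = 99                                            # number of farmers surveyed
scores = rng.normal(3.0, 0.8, size=(n, 2))        # competitiveness / sustainability scores (simulated 1-5 scale)
education = rng.integers(0, 2, size=n)            # 0 = less formal education, 1 = longer education (simulated)

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)

contingency = np.zeros((3, 2), dtype=int)
for c, e in zip(clusters, education):
    contingency[c, e] += 1

chi2, p, dof, _ = chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
```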
Abstract:
This thesis is about the discretionary role of the line manager in inspiring the work engagement of staff and their resulting innovative behaviour, examined through the lens of Social Exchange Theory (Blau, 1964) and the Job Demands-Resources theory (Bakker, Demerouti, Nachreiner & Schaufeli, 2001). The study is focused on a large British public sector organisation undergoing a major organisational shift in the way in which it operates as part of the public sector. It is often claimed that people do not leave organisations; they leave line managers (Kozlowski & Doherty, 1989). Despite the knowledge in the literature concerning the importance of the line manager in organisations (Purcell, 2003), the engagement literature in particular is lacking in its consideration of such a fundamental figure in organisational life. Further, the understanding of the black box of managerial discretion and its relationship to employee- and organisation-related outcomes would benefit from greater exploration (Purcell, 2003; Gerhart, 2005; Scott et al., 2009). The purpose of this research is to address these gaps in relation to the innovative behaviour of employees in the public sector – behaviour that is not typically associated with the public sector (Bhatta, 2003; McGuire, Stoner & Mylona, 2008; Hughes, Moore & Kataria, 2011). The study is a CASE Award PhD thesis, requiring academic and practical elements to the research. The study is of one case organisation, focusing on one service characterised by a high level of adoption of Strategic Human Resource Management (SHRM) activities and operating in a rather unique manner for the public sector, having private sector competition for work. The study involved a mixed-methods approach to data collection. Preliminary focus groups with 45 participants were conducted, followed by an ethnographic period of five months embedded in the service, conducting interviews and observations. This culminated in a quantitative survey delivered within the wider directorate to approximately 500 staff members. The study used aspects of the Grounded Theory (Glaser & Strauss, 1967) approach to analyse the data and developed results that highlight the importance of the line manager, in an area characterised by SHRM and organisational change, for engaging employees and encouraging innovative behaviour. This survey was completed on behalf of the organisation and its findings are presented in Appendix 1, in order to keep the focus of the PhD on theory development. Implications for theory and practice are discussed alongside the core finding. Line managers' discretion surrounding the provision of job resources (in particular trust, autonomy, and the implementation and interpretation of combined bundles of SHRM policies and procedures) influenced the exchange process by which employees responded with work engagement and innovative behaviour. Limitations to the research are those commonly attributed to cross-sectional data collection methods and those surrounding the generalisability of the qualitative findings outside of the contextual factors characterising the service area. Suggestions for future research involve addressing these limitations and further exploring the discretionary role, with regard to extending our understanding of line manager discretion.
Abstract:
MOTIVATION: G protein-coupled receptors (GPCRs) play an important role in many physiological systems by transducing an extracellular signal into an intracellular response. Over 50% of all marketed drugs are targeted towards a GPCR. There is considerable interest in developing an algorithm that could effectively predict the function of a GPCR from its primary sequence. Such an algorithm is useful not only in identifying novel GPCR sequences but also in characterizing the interrelationships between known GPCRs. RESULTS: An alignment-free approach to GPCR classification has been developed using techniques drawn from data mining and proteochemometrics. A dataset of over 8000 sequences was constructed to train the algorithm; this represents one of the largest GPCR datasets currently available. A predictive algorithm was developed based upon the simplest reasonable numerical representation of the protein's physicochemical properties. A selective top-down approach was developed, which used a hierarchical classifier to assign sequences to subdivisions within the GPCR hierarchy. The predictive performance of the algorithm was assessed against several standard data mining classifiers and further validated against Support Vector Machine-based GPCR prediction servers. The selective top-down approach achieves significantly higher accuracy than standard data mining methods in almost all cases.
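A hedged sketch of the selective top-down idea follows: each sequence is reduced to a simple compositional representation, and a family-level classifier routes it to a subfamily classifier trained only on that family. The feature choice (amino-acid composition), the random-forest classifiers and the toy sequences are stand-ins for illustration, not the paper's dataset or exact method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq: str) -> np.ndarray:
    """Amino-acid composition: a very simple numerical representation of a sequence."""
    seq = seq.upper()
    return np.array([seq.count(a) / max(len(seq), 1) for a in AMINO_ACIDS])

def train_top_down(seqs, families, subfamilies):
    X = np.vstack([composition(s) for s in seqs])
    top = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, families)
    # One subfamily classifier per family, trained only on that family's sequences.
    lower = {}
    for fam in set(families):
        idx = [i for i, f in enumerate(families) if f == fam]
        if len({subfamilies[i] for i in idx}) > 1:
            lower[fam] = RandomForestClassifier(n_estimators=100, random_state=0).fit(
                X[idx], [subfamilies[i] for i in idx])
    return top, lower

def predict_top_down(seq, top, lower):
    x = composition(seq).reshape(1, -1)
    fam = top.predict(x)[0]                       # assign the family first...
    sub = lower[fam].predict(x)[0] if fam in lower else None   # ...then descend the hierarchy
    return fam, sub

# Toy usage with made-up sequences and labels (for illustration only).
seqs = ["MKTAYIAKQR", "MKLVFLVLLF", "GGSSGGSSAA", "GGTTGGTTCC", "MKTAYIAKQA", "GGSSGGSSAV"]
families = ["A", "A", "B", "B", "A", "B"]
subfamilies = ["A1", "A2", "B1", "B1", "A1", "B1"]
top, lower = train_top_down(seqs, families, subfamilies)
print(predict_top_down("MKTAYIAKQS", top, lower))
```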
Abstract:
Adopting a grounded theory methodology, the study describes how an event and pressure impact upon a process of deinstitutionalization and institutional change. Three case studies were theoretically sampled in relation to each other. They yielded mainly qualitative data from methods that included interviews, observations, participant observations, and document reviews. Each case consisted of a boundaried cluster of small enterprises that were not industry specific and were geographically dispersed. Overall findings describe how an event, i.e. a stimulus, causes disruption, which in turn may cause pressure. Pressure is then translated as a tension within the institutional environment, which is characterized by opposing forces that encourage institutional breakdown and institutional maintenance. Several contributions are made: Deinstitutionalization as a process is inextricable from the formation of institutions – both are needed to make sense of institutional change on a conceptual level but are also inseparable experientially in the field; stimuli are conceptually different to pressures; the historical basis of a stimulus may impact on whether pressure and institutional change occurs; pressure exists in a more dynamic capacity rather than only as a catalyst; institutional breakdown is a non-linear irregular process; ethical and survival pressures as new types were identified; institutional current, as an underpinning mechanism, influences how the tension between institutional breakdown and maintenance plays out.