899 results for free software environment for statistical computing and graphics R
Abstract:
The importance of R&D investment in explaining economic growth is well documented in the literature. Policies by modern governments increasingly recognise the benefits of supporting R&D investment. Government funding has, however, become an increasingly scarce resource in times of financial crisis and economic austerity. Hence, it is important that available funds are used and targeted effectively. This paper offers the first systematic review and critical discussion of what the R&D literature currently has to say about the effectiveness of major public R&D policies in increasing private R&D investment. Public policies are considered within three categories: R&D tax credits and direct subsidies; support of the university research system and the formation of high-skilled human capital; and support of formal R&D cooperation across a variety of institutions. Crucially, the large body of more recent literature shows a shift away from the earlier findings that public subsidies often crowd out private R&D towards finding that subsidies typically stimulate private R&D. Tax credits are also found, much more unanimously than before, to have positive effects. University research, high-skilled human capital, and R&D cooperation also typically increase private R&D. Recent work indicates that accounting for non-linearities is one area of research that may refine existing results. © 2014 John Wiley & Sons Ltd.
Abstract:
Rivals may voluntarily share Research and Development (R&D) results even in the absence of any binding agreements or collusion. In a model where rival firms engage in non-cooperative, independent R&D processes, we used optimization and game-theoretic analysis to study the firms' equilibrium strategies. Our work showed that, while minimal spillover is always an equilibrium, there may be another equilibrium in which firms reciprocally choose high, sometimes perfect, spillover rates. The incentive for sharing R&D output is based on firms' expectations of learning from their rivals' R&D progress in the future. This leads to strategic complementarities between the firms' choices of spillover rates, and policy implications follow. Public research agencies can contribute more to social welfare by providing research as a public good. In a non-cooperative public-private research relationship where parallel R&D is conducted, by making its R&D results accessible, the public research agency can stimulate private spillovers, even if there is rivalry among the private firms who can benefit from such spillovers.
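The strategic-complementarity logic above can be illustrated with a toy 2x2 game; the payoff numbers below are invented for illustration and are not taken from the paper's model. When sharing pays off only if the rival also shares, both the minimal-spillover profile and the high-spillover profile survive as Nash equilibria:

```python
# Hypothetical 2x2 spillover game: each firm chooses a LOW or HIGH spillover
# rate, and payoffs exhibit strategic complementarity (sharing pays off only
# if the rival also shares). Payoff values are invented for illustration.

PAYOFFS = {  # (firm1_choice, firm2_choice) -> (firm1_payoff, firm2_payoff)
    ("low", "low"):   (3, 3),
    ("low", "high"):  (4, 1),
    ("high", "low"):  (1, 4),
    ("high", "high"): (5, 5),
}

def is_nash(s1, s2):
    """Check that neither firm gains by unilaterally switching its spillover rate."""
    u1, u2 = PAYOFFS[(s1, s2)]
    best1 = max(PAYOFFS[(d, s2)][0] for d in ("low", "high"))
    best2 = max(PAYOFFS[(s1, d)][1] for d in ("low", "high"))
    return u1 == best1 and u2 == best2

equilibria = [(a, b) for a in ("low", "high") for b in ("low", "high") if is_nash(a, b)]
print(equilibria)  # -> [('low', 'low'), ('high', 'high')]
```

Both the minimal- and the high-spillover profile are equilibria here, mirroring the paper's multiple-equilibria result.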
Abstract:
The data given in this and previous communications are insufficient to assess the quantitative role of these supplementary sources in the Indian Ocean, but they do not rule out their local significance. Elucidation of this problem requires further data on the characteristics of the composition and structure of nodules in various metallogenic regions of the ocean floor. A study of the distribution of ore elements in nodules, both depthwise and over the area of the floor, together with compilation of the first schematic maps (based on the results of analyses of samples from 54 stations), enables us to give a more precise empirical relation between the Mn, Fe, Ni, Cu, and Co contents in Indian Ocean nodules, the manganese ratio, and the values of the oxidation potential, which vary regularly with depth. This in turn also enables us to confirm that the formation of nodules completes the prolonged process of deposition of ore components from ocean waters and the complex physico-chemical transformations of sediments in the bottom layer. Microprobe investigation of ore rinds revealed the nonuniform distribution of a number of elements within them, owing to the capacity of particles of hydrated oxides of manganese and iron to adsorb various elements. High concentrations of individual elements are correlated with local sectors of the ore rinds, in which the presence of todorokite, in particular, has been noted. The appearance of this mineral apparently requires elevated Ca, Mg, Na, and K concentrations, because the stable crystalline phase of this specific mineral form of the psilomelane group may be formed when these cations are incorporated into a lattice of the delta-MnO2 type.
Abstract:
This paper analyzes the determinants of R&D offshoring by Spanish firms using information from the Panel of Technological Innovation. We find that being an exporter, international technological cooperation, continuous R&D engagement, applying for patents, being a foreign subsidiary, and firm size are factors that positively affect the decision to offshore R&D. In addition, we find that a lack of financing is a relatively more important obstacle for independent firms than for firms that belong to business groups. For the latter, we also find that the factors that influence the decision to offshore R&D differ depending on whether the firm purchases the R&D services within the group or through the market: a higher degree of importance assigned to internal sources of information for innovation, as compared to market sources, increases (decreases) the probability of R&D offshoring only through the group (market).
Abstract:
Statistical computing when input/output is driven by a Graphical User Interface is considered. A proposal is made for automatic control of computational flow to ensure that only strictly required computations are actually carried out. The computational flow is modeled by a directed graph for implementation in any object-oriented programming language with symbolic manipulation capabilities. A complete implementation example is presented to compute and display frequency-based piecewise linear density estimators such as histograms or frequency polygons.
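The mechanism described above can be sketched as follows; this is a minimal illustration, not the paper's implementation. Each computation is a node in a directed graph, changing an input marks everything downstream dirty, and a node is recomputed only when it is requested while dirty:

```python
class Node:
    """One computation in the flow graph; caches its value until an input changes."""
    def __init__(self, func, *inputs):
        self.func, self.inputs = func, list(inputs)
        self.children = []            # nodes that depend on this one
        for n in self.inputs:
            n.children.append(self)
        self._value, self._dirty = None, True
        self.evals = 0                # count actual computations (for the demo)

    def set(self, value):
        """Assign a value to a source node and invalidate everything downstream."""
        self._value, self._dirty = value, False
        for c in self.children:
            c._mark_dirty()

    def _mark_dirty(self):
        if not self._dirty:
            self._dirty = True
            for c in self.children:
                c._mark_dirty()

    def get(self):
        """Recompute only if some input changed since the last evaluation."""
        if self._dirty:
            self._value = self.func(*[n.get() for n in self.inputs])
            self._dirty = False
            self.evals += 1
        return self._value

# Demo: raw counts -> relative frequencies (as for a histogram display).
counts = Node(None)
counts.set([2, 3, 5])
freqs = Node(lambda c: [x / sum(c) for x in c], counts)
freqs.get()
freqs.get()                # second call hits the cache, no recomputation
counts.set([1, 1, 2])      # invalidates the downstream node
freqs.get()
print(freqs.evals)         # -> 2: computed only when strictly required
```

The graph here is trivial (one edge), but the same invalidation logic applies to arbitrary directed acyclic flows of GUI-driven computations.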
Abstract:
This paper presents the analysis and evaluation of the Power Electronics course at São Paulo State University (UNESP), Campus of Ilha Solteira (SP), Brazil, which includes the usage of interactive Java simulation tools and educational software to aid the teaching of power electronic converters. This platform serves as an oriented course for the lectures and as supplementary support for laboratory experiments in the power electronics courses. The simulation tools provide an interactive and dynamic way to visualize the behavior of power electronic converters, together with the educational software, which covers the theory and a list of subjects for circuit simulations. In order to verify the performance and effectiveness of the proposed interactive educational platform, a statistical analysis covering the last three years is presented. © 2011 IEEE.
Abstract:
A parallel computing environment to support the optimization of large-scale engineering systems is designed and implemented on Windows-based personal computer networks, using the master-worker model and the Parallel Virtual Machine (PVM). It involves the decomposition of a large engineering system into a number of smaller subsystems optimized in parallel on worker nodes, and the coordination of subsystem optimization results on the master node. The environment consists of six functional modules: the master control, the optimization model generator, the optimizer, the data manager, the monitor, and the post-processor. An object-oriented design of these modules is presented. The environment supports all steps from the generation of optimization models to the solution and its visualization on networks of computers. User-friendly graphical interfaces make it easy to define the problem and to monitor and steer the optimization process. It has been verified with an example of a large space-truss optimization. (C) 2004 Elsevier Ltd. All rights reserved.
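The master-worker decomposition described above can be sketched as follows, using Python's multiprocessing in place of PVM; the subsystem names and the toy grid-search objective are invented for illustration:

```python
# Sketch of master-worker subsystem optimization (multiprocessing stands in
# for PVM; the quadratic subsystem objective and names are hypothetical).

from multiprocessing import Pool

def optimize_subsystem(subsystem):
    """Worker: minimize a toy quadratic objective for one subsystem by grid search."""
    name, target = subsystem
    best_x = min((x * 0.01 for x in range(-500, 501)),
                 key=lambda x: (x - target) ** 2)
    return name, best_x

def master(subsystems):
    """Master: farm the subsystems out to worker processes, then collect results."""
    with Pool(processes=4) as pool:
        results = pool.map(optimize_subsystem, subsystems)
    return dict(results)   # coordination step: merge subsystem solutions

if __name__ == "__main__":
    solution = master([("truss_top", 1.25), ("truss_base", -2.0)])
    print(solution)        # each subsystem optimized independently on a worker
```

In the real environment the coordination step would iterate, feeding coupled variables back to the subsystems until the overall design converges.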
Abstract:
Statistical organizations of the Caribbean countries continue to face serious challenges posed by the increased demand for more relevant, accurate and timely statistical data. Tangible progress has been made in delivering key products in the area of economic statistics. The central banks of the subregion have assisted greatly in this respect. However, even in this branch of statistics there are still several glaring gaps. The situation is even worse in other areas of statistics, including social and environmental statistics. Even though all countries of the subregion have committed to the Millennium Development Goals (MDGs) as well as to other internationally agreed development goals, serious challenges remain with respect to the compilation of the agreed indicators to assist in assessing progress towards the goals. It is acknowledged that appreciable assistance has been provided by the various donor agencies to develop statistical competence. This assistance has translated into the many gains that have been made. However, the national statistical organizations require much more help if they are to reach the plateau of self-reliance in the production of the necessary statistical services. The governments of the subregion have also committed to investing more in statistical development and in promoting a statistics culture in the Caribbean. The training institutions of the subregion have also started to address this urgent need by broadening and deepening their teaching curricula. Funding support is urgently required to develop the appropriate cadre of statistical professionals to deliver the required outputs. However, this training must be continuous and must be sustained over an appropriate period, since the current turnover of trained staff is high. This programme of training will need to be intensive for a period of at least five years, after which it may be reduced.
The modalities of training will also have to be more focused: in addition to formal training at educational institutions, there is much room for on-the-job training, group training at the national level, and much more south-south capacity building. There is also an urgent need to strengthen cooperation and collaboration among the donor community in the delivery of assistance for statistical development. Several development agencies with very good intentions are currently operating in the Caribbean. There is a danger, however, that efforts can be duplicated if agencies do not collaborate adequately. Development agencies therefore need to consult with each other much more and share their development agendas more freely if duplication is to be averted. Moreover, the pooling of resources can surely maximize the benefits to the countries of the subregion.
Abstract:
In the present study we use multivariate analysis techniques to discriminate signal from background in the fully hadronic decay channel of ttbar events. We give a brief introduction to the role of the top quark in the Standard Model and a general description of the CMS experiment at the LHC. We have used the CMS experiment's computing and software infrastructure to generate and prepare the data samples used in this analysis. We tested the performance of three different classifiers applied to our data samples and used the selection obtained with the Multi-Layer Perceptron classifier to give an estimate of the statistical and systematic uncertainties on the cross-section measurement.
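The kind of multivariate selection described above can be sketched with scikit-learn on invented two-feature events; the real analysis uses CMS ttbar samples and many more discriminating variables:

```python
# Toy sketch of MLP-based signal/background discrimination. The two Gaussian
# "event" features are synthetic stand-ins for real kinematic variables.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
signal     = rng.normal(loc=+1.0, scale=1.0, size=(500, 2))
background = rng.normal(loc=-1.0, scale=1.0, size=(500, 2))
X = np.vstack([signal, background])
y = np.array([1] * 500 + [0] * 500)          # 1 = signal, 0 = background

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")           # high on this well-separated toy data
```

In practice the classifier's output score is cut on to define the event selection, and the cut choice feeds into the statistical and systematic uncertainties on the cross section.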
Abstract:
Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons that the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and of systems of interacting agents as fundamental abstractions for designing, developing and managing at runtime typically distributed software systems. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems. For this reason, research on methodologies becomes a basic point of the scientific activity. Currently most agent-oriented methodologies are supported by small teams of academic researchers, and as a result, most of them are at an early stage, still in the context of mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented and are very often defined and presented by focusing only on specific aspects of the methodology. The role played by meta-models becomes fundamental for comparing and evaluating the methodologies. In fact, a meta-model specifies the concepts, rules and relationships used to define methodologies. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e.
the process to be followed, the work products to be generated, and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to fully model all the aspects related to multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems: however, it is at least clear that all non-agent elements of a multi-agent system are typically considered to be part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent system community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions - entities of the environment encapsulating some functions - and topology abstractions - entities of the environment that represent the (either logical or physical) spatial structure. In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for supporting the management of the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which can be used by designers to provide different levels of abstraction over multi-agent systems. The research in these fields has led to the formulation of a new version of the SODA methodology, where environment abstractions and layering principles are exploited for engineering multi-agent systems.
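The two environment ingredients named above can be illustrated with a minimal sketch; the class and instance names are invented, and this is not the actual SODA meta-model:

```python
# Illustrative sketch of the two environment ingredients: environment
# abstractions encapsulating some function, and topology abstractions
# representing (logical or physical) spatial structure.

class EnvironmentAbstraction:
    """A non-agent entity of the MAS encapsulating some function."""
    def __init__(self, name, function):
        self.name, self.function = name, function

class TopologyAbstraction:
    """A spatial structure connecting environment entities."""
    def __init__(self, name):
        self.name, self.neighbours = name, {}

    def connect(self, a, b):
        """Record an undirected adjacency between two environment entities."""
        self.neighbours.setdefault(a.name, set()).add(b.name)
        self.neighbours.setdefault(b.name, set()).add(a.name)

store  = EnvironmentAbstraction("store", function="shared knowledge repository")
sensor = EnvironmentAbstraction("sensor", function="perception source")
site   = TopologyAbstraction("site")
site.connect(store, sensor)
print(site.neighbours["store"])   # -> {'sensor'}
```

A layered description would then group such entities into levels of abstraction, each layer hiding the detail of the one below.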
Abstract:
The present study carried out an analysis of rural landscape changes. In particular, the study focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A review of the literature reveals a general lack of studies dealing with the modelling of the rural built environment, and hence a theoretical modelling approach for this purpose is needed. Advancements in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two types of transformation dynamics affecting mainly the rural built environment can be observed: the conversion of rural buildings and the increase in building numbers. The specific aim of the present study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces that acted on building allocation. In fact, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology is composed of several conceptual steps that cover different aspects of the development of a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for the collection of data, the choice of the most suitable algorithm in relation to the statistical theory and methods used, the calibration process, and the evaluation of the model.
A different combination of factors in various parts of the territory generated favourable or less favourable conditions for building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents which is not suitable for building allocation. Presence or absence of buildings can therefore be adopted as indicators of such driving conditions, since they represent the expression of the action of driving forces in the land-suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modeling techniques, provides evidence of which driving forces are involved in the allocation dynamic and an insight into their level of influence on the process. GIS software, by means of spatial analysis tools, makes it possible to associate the concepts of presence and absence with point features, generating a point process. Presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist, and so they are generated by a stochastic mechanism. Possible driving forces are selected, and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for the analysis of explanatory variables and for the identification of the key driving variables behind the site selection process for new building allocation. The model developed by following the methodology is applied to a case study to test the validity of the methodology. In particular, the study area for testing the methodology is the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics have occurred intensively.
The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The calibration of the model is carried out using spatial data on the periurban and rural parts of the study area within the 1975-2005 time period, by means of a generalised linear model. The resulting output of the model fit is a continuous grid surface whose cells assume probability values of building occurrence ranging from 0 to 1 across the rural and periurban parts of the study area. Hence the response variable assesses the changes in the rural built environment that occurred in this time interval and is correlated to the selected explanatory variables by means of a generalised linear model using logistic regression. By comparing the probability map obtained from the model with the actual rural building distribution in 2005, the interpretive capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends that occurred in other study areas, and with reference to different time intervals, depending on the availability of data. The use of suitable data in terms of time, information, and spatial resolution, and the costs related to data acquisition, pre-processing, and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short/medium-range future scenarios for the distribution of the rural built environment in the study area. In order to predict future scenarios, it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the time interval used for calibration.
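The presence/absence logistic model described above can be sketched as follows; the driver variables and the synthetic data are invented for illustration, whereas the study uses GIS layers of the Imola district:

```python
# Sketch of a presence/absence logistic regression: building occurrence is
# regressed on hypothetical driving forces, yielding a 0-1 probability per
# point, analogous to the study's probability surface.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400
road_dist = rng.uniform(0, 5, n)     # km to nearest road (hypothetical driver)
slope     = rng.uniform(0, 30, n)    # terrain slope in degrees (hypothetical driver)

# Invented "true" process: buildings favoured near roads and on gentle slopes.
logit = 2.0 - 1.2 * road_dist - 0.08 * slope
presence = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([road_dist, slope])
model = LogisticRegression(max_iter=1000).fit(X, presence)
prob = model.predict_proba(X)[:, 1]  # probability of building occurrence per point
print(model.coef_)                   # both fitted effects come out negative here
```

Comparing `prob` against the observed presences plays the role of the study's comparison between the probability map and the actual 2005 building distribution.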
Abstract:
Complex diseases such as cancer result from multiple genetic changes and environmental exposures. Due to the rapid development of genotyping and sequencing technologies, we are now able to more accurately assess the causal effects of many genetic and environmental factors. Genome-wide association studies have been able to localize many causal genetic variants predisposing to certain diseases. However, these studies only explain a small portion of the variation in the heritability of diseases. More advanced statistical models are urgently needed to identify and characterize additional genetic and environmental factors and their interactions, which will enable us to better understand the causes of complex diseases. In the past decade, thanks to increasing computational capabilities and novel statistical developments, Bayesian methods have been widely applied in genetics/genomics research and have demonstrated superiority over some standard approaches in certain research areas. Gene-environment and gene-gene interaction studies are among the areas where Bayesian methods can fully exert their functionality and advantages. This dissertation focuses on developing new Bayesian statistical methods for the analysis of data with complex gene-environment and gene-gene interactions, as well as extending some existing methods for gene-environment interactions to other related areas. It includes three sections: (1) deriving the Bayesian variable selection framework for hierarchical gene-environment and gene-gene interactions; (2) developing the Bayesian Natural and Orthogonal Interaction (NOIA) models for gene-environment interactions; and (3) extending the applications of two Bayesian statistical methods, which were developed for gene-environment interaction studies, to other related types of studies such as adaptive borrowing of historical data.
We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene-gene interactions (epistasis) and gene-environment interactions in the same model. It is well known that, in many practical situations, there exists a natural hierarchical structure between the main effects and the interactions in the linear model. Here we propose a model that incorporates this hierarchical structure into the Bayesian mixture model, such that irrelevant interaction effects can be removed more efficiently, resulting in more robust, parsimonious and powerful models. We evaluate both the 'strong hierarchical' and 'weak hierarchical' models, which specify that both or one of the main effects of interacting factors must be present for the interactions to be included in the model. Extensive simulation results show that the proposed strong and weak hierarchical mixture models control the proportion of false positive discoveries and yield a powerful approach to identifying the predisposing main effects and interactions in studies with complex gene-environment and gene-gene interactions. We also compare these two models with the 'independent' model that does not impose this hierarchical constraint and observe their superior performance in most of the considered situations. The proposed models are applied to real data analyses of gene-environment interactions in lung cancer and cutaneous melanoma case-control studies. Bayesian statistical models have the advantage of being able to incorporate useful prior information into the modeling process. Moreover, the Bayesian mixture model outperforms the multivariate logistic model in terms of parameter estimation and variable selection in most cases.
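The strong and weak hierarchy constraints described above can be illustrated with a minimal sketch (not the authors' Bayesian sampler): an interaction is admissible under strong hierarchy only if both of its main effects are in the model, and under weak hierarchy if at least one is.

```python
# Sketch of the hierarchical constraint on interaction inclusion. Effect
# names (G1, E1, ...) are hypothetical placeholders for genes/exposures.

def admissible(interaction, included_mains, rule="strong"):
    """Return True if the interaction respects the chosen hierarchy rule."""
    a, b = interaction
    if rule == "strong":
        return a in included_mains and b in included_mains
    return a in included_mains or b in included_mains   # weak hierarchy

mains = {"G1", "E1"}                       # main effects currently in the model
candidates = [("G1", "E1"), ("G1", "E2"), ("G2", "E2")]

strong = [i for i in candidates if admissible(i, mains, "strong")]
weak   = [i for i in candidates if admissible(i, mains, "weak")]
print(strong)   # -> [('G1', 'E1')]
print(weak)     # -> [('G1', 'E1'), ('G1', 'E2')]
```

In the actual framework this constraint operates inside the Bayesian mixture prior rather than as a hard filter, shrinking interactions whose main effects are absent.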
Our proposed models enforce the hierarchical constraints, which further improve the Bayesian mixture model by reducing the proportion of false positive findings among the identified interactions and by successfully identifying the reported associations. This is practically appealing for studies investigating causal factors among a moderate number of candidate genetic and environmental factors along with a relatively large number of interactions. The natural and orthogonal interaction (NOIA) models of genetic effects were previously developed to provide an analysis framework by which the estimated effects for a quantitative trait are statistically orthogonal regardless of the existence of Hardy-Weinberg Equilibrium (HWE) within loci. Ma et al. (2012) recently developed a NOIA model for gene-environment interaction studies and showed the advantages of using the model for detecting the true main effects and interactions, compared with the usual functional model. In this project, we propose a novel Bayesian statistical model that combines the Bayesian hierarchical mixture model with the NOIA statistical model and the usual functional model. The proposed Bayesian NOIA model demonstrates more power at detecting the non-null effects, with higher marginal posterior probabilities. We also review two Bayesian statistical models (the Bayesian empirical shrinkage-type estimator and Bayesian model averaging), which were developed for gene-environment interaction studies. Inspired by these Bayesian models, we develop two novel statistical methods that are able to handle related problems such as borrowing data from historical studies. The proposed methods are analogous to the gene-environment interaction methods in their success at balancing statistical efficiency and bias in a unified model.
Through extensive simulation studies, we compare the operating characteristics of the proposed models with those of existing models, including the hierarchical meta-analysis model. The results show that the proposed approaches adaptively borrow the historical data in a data-driven way. These novel models may have a broad range of statistical applications in both genetic/genomic and clinical studies.