783 results for open data value chain


Relevance: 40.00%

Abstract:

Community capacity is used to monitor socio-economic development. It is composed of a number of dimensions, which can be measured to understand the possible issues in the implementation of a policy or the outcome of a project targeting a community. Measuring community capacity dimensions is usually expensive and time-consuming, requiring locally organised surveys. We therefore investigate a technique to estimate them by applying the Random Forests algorithm to secondary open government data. This research focuses on the prediction of measures for two dimensions: sense of community and participation. The most important variables for this prediction were determined. The variables included in the datasets used to train the predictive models complied with two criteria: nationwide availability and a sufficiently fine-grained geographic breakdown, i.e. neighbourhood level. The models explained 77% of the variance in the sense of community measures and 63% in the participation measures. Due to the low geographic detail of the available outcome measures, further research is required to apply the predictive models at neighbourhood level. The variables found to be most determinant for prediction were only partially in agreement with the factors that, according to the social science literature consulted, are the most influential for sense of community and participation. This finding should be investigated further from a social science perspective in order to be understood in depth.
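
The modelling step described above lends itself to a short illustration. The sketch below trains a Random Forests regressor and reads off variable importances using scikit-learn; it is a minimal sketch, not the authors' pipeline, and the file name, columns and target are hypothetical.

```python
# Minimal sketch of the prediction approach described above, assuming a
# table of neighbourhood-level open government variables in "open_data.csv"
# with a "sense_of_community" outcome column (all names hypothetical).
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("open_data.csv")
X = df.drop(columns=["sense_of_community"])
y = df["sense_of_community"]

model = RandomForestRegressor(n_estimators=500, random_state=0)

# R^2 under cross-validation: the share of variance in the outcome
# that the open-data predictors explain (the paper reports 0.77).
print("explained variance (R^2):", cross_val_score(model, X, y, cv=5).mean())

# Variable importances identify the most determinant predictors.
model.fit(X, y)
for name, imp in sorted(zip(X.columns, model.feature_importances_),
                        key=lambda t: -t[1])[:10]:
    print(f"{name}: {imp:.3f}")
```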

Relevance: 40.00%

Abstract:

With the growing popularity of IT solutions as a key factor in increasing competitiveness and creating value for companies, the need to invest in IT projects has risen considerably. Resource limitations as an obstacle to investment have forced companies to seek methodologies for selecting and prioritising projects, ensuring that the decisions taken are those aligned with corporate strategies, so as to secure value creation and the maximisation of benefits. This thesis provides the foundations for the implementation of IT Project Portfolio Management (IT PPM) as an effective methodology for managing IT-based projects, and a tool to give executive directors clear criteria for decision-making. The document explains how to implement IT PPM in seven steps, analysing the processes and functions necessary for its successful execution. It also provides different methods and criteria for the selection and prioritisation of projects. After the theoretical part describing IT PPM, the thesis offers a case-study analysis of a pharmaceutical company. The company already has a project management department, but the need to implement IT PPM was identified because of its broad coverage of end-to-end processes in IT projects and as a way of ensuring the maximisation of benefits. Drawing on the theoretical research and the case-study analysis, the thesis concludes with a practical definition of an approximate IT PPM model as a recommendation for its implementation in the Project Management Department.
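
As a toy illustration of the kind of selection and prioritisation criteria discussed, the sketch below ranks candidate IT projects with a weighted scoring model. The criteria, weights and project names are invented for the example and are not taken from the thesis.

```python
# Illustrative weighted scoring model for IT project prioritisation.
# Criteria, weights and scores are hypothetical examples, not the
# thesis's actual method.
weights = {"strategic_alignment": 0.4, "expected_benefit": 0.35, "risk": -0.25}

projects = {
    "ERP upgrade":    {"strategic_alignment": 8, "expected_benefit": 7, "risk": 6},
    "CRM rollout":    {"strategic_alignment": 9, "expected_benefit": 6, "risk": 4},
    "Data warehouse": {"strategic_alignment": 6, "expected_benefit": 8, "risk": 5},
}

def score(p):
    # Higher alignment/benefit raise the score; higher risk lowers it.
    return sum(weights[c] * v for c, v in p.items())

for name, p in sorted(projects.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(p):.2f}")
```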

Relevance: 40.00%

Abstract:

There is a clear need for financial protection in the construction industry, both to guarantee satisfactory completion of construction projects and to guard against non-payment. However, the cost of financial protection is often felt to be disproportionately high, with unnecessary overlap between different measures. The Reading Construction Forum has commissioned and steered research, published in this report, in an effort to bring the problem out into the open and to clarify the options open to the various parties and stakeholders in the construction process. "Financial Protection in the UK Building Industry" is the first definitive report on the subject, offering an accurate and simple guide that all levels within the construction industry can understand. This accessible new guide considers the problem of financial protection and clearly lays out the alternative solutions. It looks in turn at the client, the main contractor, and the sub-contractor, discussing which financial protection options are available to each of them, and considers the pros and cons of each option. The cost of each type of financial protection is weighed against the amount of protection provided and the risks involved. The book concludes with guidance for consultants, emphasising relevant points to consider when advising clients and contractors about which type of financial protection to choose. "Financial Protection in the UK Building Industry" was researched through a literature search, the collection of statistical and financial data, and discussions with clients, contractors, sub-contractors and consultants. This investigation has shown that the direct costs of implementing financial protection measures are marginal, and that wider adoption of payment protection would create a more equitable situation between contracting parties. This guide will enable anyone in the construction industry to consider all the options and determine the best solution for them. The report was compiled by the University of Reading and funded by the Reading Construction Forum. The Forum has recently commissioned and steered a number of high-profile reports covering important aspects of the construction industry. Members of the Forum include major companies concerned with achieving high quality in the design, construction and use of commercial, retail and industrial buildings. All are committed to change and innovation in the British and European construction industries.

Relevance: 40.00%

Abstract:

We use the third perihelion pass by the Ulysses spacecraft to illustrate and investigate the "flux excess" effect, whereby open solar flux estimates from spacecraft increase with increasing heliocentric distance. We analyze the potential effects of small-scale structure in the heliospheric field (giving fluctuations in the radial component on timescales smaller than 1 h) and the kinematic time-of-flight effects of longitudinal structure in the solar wind flow. We show that the flux excess is explained neither by very small-scale structure (timescales < 1 h) nor by the kinematic "bunching effect" on spacecraft sampling. The observed flux excess is, however, well explained by the kinematic effect of larger-scale (> 1 day) solar wind speed variations on the frozen-in heliospheric field. We show that averaging over an interval T (long enough to eliminate structure originating in the heliosphere, yet short enough to avoid cancelling opposite-polarity radial field that originates from genuine sector structure in the coronal source field) is only an approximately valid way of allowing for these effects and does not adequately explain or account for differences between the streamer belt and the polar coronal holes.
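
The averaging step discussed above can be made concrete with a small numerical sketch, assuming the usual estimator in which the radial field B_r is first averaged over an interval T and only then rectified; the synthetic series and sampling choices below are invented for illustration.

```python
# Sketch of the averaging step discussed above: estimate the (one-polarity)
# open flux as F_S = 2*pi*R^2 * <|<B_r>_T|>, where B_r is first averaged
# over windows of length T and only then rectified. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
R = 1.496e11                                  # heliocentric distance, m (1 au)
br_sector = np.repeat([3e-9, -3e-9], 5000)    # genuine sector structure, T
br = br_sector + 2e-9 * rng.standard_normal(br_sector.size)  # + small-scale structure

def flux_estimate(br, window):
    # Average B_r over intervals of `window` samples, then take the modulus:
    # longer windows suppress structure produced within the heliosphere.
    m = br[: br.size // window * window].reshape(-1, window).mean(axis=1)
    return 2 * np.pi * R**2 * np.abs(m).mean()

for w in (1, 10, 100, 1000):
    print(f"T = {w:4d} samples: F_S = {flux_estimate(br, w):.3e} Wb")
```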

Relevance: 40.00%

Abstract:

The near-Earth heliospheric magnetic field intensity, |B|, exhibits a strong solar cycle variation, but returns to the same "floor" value each solar minimum. The current minimum, however, has seen |B| drop below previous minima, bringing into question the existence of a floor, or at the very least requiring a re-assessment of its value. In this study we assume heliospheric flux consists of a constant open flux component and a time-varying contribution from CMEs. In this scenario, the true floor is |B| with zero CME contribution. Using observed CME rates over the solar cycle, we estimate the "no-CME" |B| floor at ~4.0 +/- 0.3 nT, lower than previous floor estimates and below the |B| observed this solar minimum. We speculate that the drop in |B| observed this minimum may be due to a persistently lower CME rate than in the previous minimum, though there are large uncertainties in the supporting observational data.
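
Under the two-component assumption above, the floor falls out of a simple regression: model |B| as a constant plus a term proportional to the CME rate, and read the "no-CME" floor off as the intercept. A minimal sketch with invented numbers (only the ~4.0 nT figure comes from the paper):

```python
# Sketch of the floor estimate: model |B| = B_floor + k * (CME rate) and
# read the "no-CME" floor off as the intercept. The data arrays below are
# invented for illustration, not the observations used in the paper.
import numpy as np

cme_rate = np.array([0.5, 1.0, 2.0, 3.0, 4.0])   # CMEs per day (hypothetical)
b_mag    = np.array([4.6, 5.1, 6.2, 7.4, 8.3])   # observed |B| in nT (hypothetical)

k, b_floor = np.polyfit(cme_rate, b_mag, 1)
print(f"no-CME floor estimate: {b_floor:.1f} nT (paper: ~4.0 +/- 0.3 nT)")
```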

Relevance: 40.00%

Abstract:

The construction industry incurs a considerable amount of waste as a result of poor logistics supply chain network management; managing logistics in the construction industry is therefore critical. An effective logistics system ensures delivery of the right products and services to the right players at the right time while minimising costs and rewarding all sectors based on the value added to the supply chain. This paper reports on an ongoing research study on the concept of context-aware service delivery in construction project supply chain logistics. As part of the emerging wireless technologies, an Intelligent Wireless Web (IWW) with context-aware computing capability represents the next generation of ICT applications for construction-logistics management. This intelligent system has the potential to serve and improve construction logistics through access to context-specific data, information and services. Existing mobile communication deployments in the construction industry rely on static modes of information delivery and do not take into account the worker's changing context and dynamic project conditions. The major problems in these applications are the lack of context-specificity in the distribution of information, services and other project resources, and the lack of cohesion with the existing desktop-based ICT infrastructure. The research focuses on identifying the context dimensions, such as user context, environmental context and project context; selecting technologies to capture context parameters, such as wireless sensors and RFID; and selecting supporting technologies such as wireless communication, the Semantic Web, Web Services and agents. The process of integrating Context-Aware Computing and Web Services to facilitate the creation of an intelligent collaboration environment for managing construction logistics will take into account all the necessary critical parameters, such as storage, transportation, distribution and assembly, within off-site and on-site project activities.
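
To make the idea of context-specific delivery concrete, the toy sketch below models the three context dimensions identified (user, environmental and project context) and returns only the services that match the worker's current context. It is an illustration of the concept, not the project's IWW implementation; the services and matching rules are hypothetical.

```python
# Toy sketch of context-aware service delivery for construction logistics.
# The context dimensions follow the paper (user, environmental, project);
# the services and matching rules are hypothetical.
from dataclasses import dataclass

@dataclass
class Context:
    user_role: str        # user context, e.g. from an RFID badge
    location: str         # environmental context, e.g. from wireless sensors
    project_phase: str    # project context, e.g. from the project schedule

SERVICES = {
    "delivery_schedule": lambda c: c.user_role == "site_manager",
    "material_tracking": lambda c: c.location == "storage_yard",
    "assembly_drawings": lambda c: c.project_phase == "assembly",
}

def relevant_services(ctx: Context):
    # Deliver only the services that match the worker's current context,
    # instead of the static information push of existing deployments.
    return [name for name, rule in SERVICES.items() if rule(ctx)]

print(relevant_services(Context("site_manager", "storage_yard", "assembly")))
```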

Relevance: 40.00%

Abstract:

In the decade since OceanObs '99, great advances have been made in the field of ocean data dissemination. The use of Internet technologies has transformed the landscape: users can now find, evaluate and access data rapidly and securely using only a web browser. This paper describes the current state of the art in dissemination methods for ocean data, focussing particularly on ocean observations from in situ and remote sensing platforms. We discuss current efforts being made to improve the consistency of delivered data and to increase the potential for automated integration of diverse datasets. An important recent development is the adoption of open standards from the Geographic Information Systems community; we discuss the current impact of these new technologies and their future potential. We conclude that new approaches will indeed be necessary to exchange data more effectively and forge links between communities, but these approaches must be evaluated critically through practical tests, and existing ocean data exchange technologies must be used to their best advantage. Investment in key technology components, cross-community pilot projects and the enhancement of end-user software tools will be required in order to assess and demonstrate the value of any new technology.
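
One of the GIS-community open standards referred to can be exercised with a minimal request: the sketch below issues a standard OGC WMS 1.3.0 GetCapabilities call, which is how a client discovers the map layers an ocean data server offers. The server URL is a placeholder.

```python
# Minimal sketch of using an open GIS standard (OGC WMS) to query an ocean
# data server. The URL is a placeholder; the parameters are standard WMS 1.3.0.
import requests

params = {
    "service": "WMS",
    "version": "1.3.0",
    "request": "GetCapabilities",
}
resp = requests.get("https://example.org/ocean/wms", params=params, timeout=30)
resp.raise_for_status()
print(resp.text[:500])   # XML capabilities document listing available layers
```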

Relevance: 40.00%

Abstract:

We present a new methodology that couples neutron diffraction experiments over a wide Q range with single-chain modelling in order to explore, in a quantitative manner, the intrachain organization of non-crystalline polymers. The technique is based on the assignment of parameters describing the chemical, geometric and conformational characteristics of the polymeric chain, and on the variation of these parameters to minimize the difference between the predicted and experimental diffraction patterns. The method is successfully applied to the study of molten poly(tetrafluoroethylene) at two different temperatures, and provides unambiguous information on the configuration of the chain and its degree of flexibility. From analysis of the experimental data a model is derived with CC and CF bond lengths of 1.58 and 1.36 Å, respectively, a backbone valence angle of 110° and a torsional angle distribution characterized by four isomeric states, namely a split trans state at ±18°, giving rise to a helical chain conformation, and two gauche states at ±112°. The probability of trans conformers is 0.86 at T = 350 °C, decreasing slightly to 0.84 at T = 400 °C. Correspondingly, the chain segments are characterized by long all-trans sequences with random changes in sign, anisotropic in nature, which give rise to a rather stiff chain. We compare the results of this quantitative analysis of the experimental scattering data with the theoretical predictions of both force-field and molecular-orbital conformation energy calculations.
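
The refinement loop at the heart of the methodology, varying the chain parameters to minimise the difference between predicted and experimental patterns, can be sketched as below. The predicted-pattern function is a deliberate stand-in: the real single-chain scattering calculation is the substance of the method and is not reproduced here.

```python
# Sketch of the parameter-refinement loop described above: adjust chain
# parameters to minimise the misfit between predicted and measured S(Q).
# `predict_pattern` is a placeholder for the single-chain scattering model.
import numpy as np
from scipy.optimize import minimize

q = np.linspace(0.5, 30.0, 300)                  # Q grid, 1/angstrom
s_exp = np.exp(-0.1 * (q - 5.0) ** 2) + 0.05     # hypothetical measured pattern

def predict_pattern(q, cc_bond, cf_bond, valence_angle, trans_prob):
    # Placeholder: the real model builds chains from these parameters and
    # computes the intrachain structure factor. Here, a toy peak shape.
    return np.exp(-0.1 * (q - cc_bond * 3.2) ** 2) + 0.05 * trans_prob / 0.86

def misfit(p):
    return np.sum((predict_pattern(q, *p) - s_exp) ** 2)

# Start from chemically sensible values: CC ~1.58 A, CF ~1.36 A,
# valence angle ~110 deg, trans probability ~0.86 (the fitted results).
res = minimize(misfit, x0=[1.55, 1.35, 110.0, 0.85], method="Nelder-Mead")
print("refined parameters:", res.x)
```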

Relevance: 40.00%

Abstract:

Pervasive computing is a continually and rapidly growing field, although it still remains in relative infancy. The possible applications are numerous, and stand to fundamentally change the way users interact with technology. However, alongside these are equally numerous potential undesirable effects and risks. The lack of empirical naturalistic data from the real world makes studying the true impacts of this technology difficult. This paper describes how two independent research projects shared such valuable empirical data on the relationship between pervasive technologies and users. Each project had different aims and adopted different methods, but both successfully used the same data and arrived at the same conclusions. This paper demonstrates the benefit of sharing research data in multidisciplinary pervasive computing research where real-world implementations are not widely available.

Relevance: 40.00%

Abstract:

Data from civil engineering projects can inform the operation of built infrastructure. This paper captures lessons for such data handover, from projects into operations, through interviews with leading clients and their supply chain. Clients are found to value receiving accurate and complete data. They recognise opportunities to use high-quality information in decision-making about capital and operational expenditure, as well as in ensuring compliance with regulatory requirements. Providing this value to clients is a motivation for information management in projects. However, data handover is difficult because key people leave before project completion, and because different data formats and structures are used in project delivery and operations. Lessons learnt from leading practice include defining data requirements at the outset, getting operations teams involved early, shaping the evolution of interoperable systems and standards, developing handover processes that check data rather than documentation, and fostering the skills to use and update project data in operations.
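
The lesson about handover processes that check data rather than documentation can be illustrated with a small validation sketch; the required fields stand in for a client's data requirements and are hypothetical.

```python
# Sketch of checking handover data rather than documentation: validate that
# each asset record meets the client's data requirements defined at the
# outset. Field names are hypothetical examples.
REQUIRED_FIELDS = ["asset_id", "location", "install_date", "maintainer"]

def check_record(record: dict) -> list[str]:
    """Return the data requirements this record fails to meet."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

handover = [
    {"asset_id": "P-1042", "location": "Pier 3", "install_date": "2014-06-02",
     "maintainer": "contractor A"},
    {"asset_id": "P-1043", "location": "", "install_date": "2014-06-09",
     "maintainer": None},
]

for rec in handover:
    missing = check_record(rec)
    status = "OK" if not missing else f"missing: {', '.join(missing)}"
    print(rec["asset_id"], status)
```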

Relevance: 40.00%

Abstract:

Traditionally, the formal scientific output in most fields of natural science has been limited to peer-reviewed academic journal publications, with less attention paid to the chain of intermediate data results and their associated metadata, including provenance. In effect, this has constrained the representation and verification of data provenance to the confines of the related publications. Detailed knowledge of a dataset's provenance is essential to establish the pedigree of the data for its effective re-use, and to avoid redundant re-enactment of the experiment or computation involved. Determining the authenticity and quality of open-access data is increasingly important, especially considering the growing volumes of datasets appearing in the public domain. To address these issues, we present an approach that combines the Digital Object Identifier (DOI), a widely adopted citation technique, with existing, widely adopted climate science data standards to formally publish the detailed provenance of a climate research dataset as an associated scientific workflow. This is integrated with linked-data-compliant data re-use standards (e.g. OAI-ORE) to enable a seamless link between a publication and the complete trail of lineage of the corresponding dataset, including the dataset itself.
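
The linking described, with a DOI-identified dataset aggregated together with its provenance trail and publication, can be sketched in RDF using the OAI-ORE vocabulary. The namespace below is the real ORE one, but the DOI and resource URIs are placeholders.

```python
# Sketch of an OAI-ORE aggregation linking a DOI-identified dataset to its
# publication and provenance workflow. The namespace is the real ORE
# vocabulary; the DOI and resource URIs are placeholders.
from rdflib import Graph, Namespace, URIRef

ORE = Namespace("http://www.openarchives.org/ore/terms/")

g = Graph()
g.bind("ore", ORE)

aggregation = URIRef("https://doi.org/10.xxxx/example-dataset")  # placeholder DOI
for resource in [
    "https://example.org/dataset/files",        # the dataset itself
    "https://example.org/dataset/provenance",   # the workflow/provenance trail
    "https://example.org/paper",                # the associated publication
]:
    g.add((aggregation, ORE.aggregates, URIRef(resource)))

print(g.serialize(format="turtle"))
```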

Relevance: 40.00%

Abstract:

We analyse the widely used international/Zürich sunspot number record, R, with a view to quantifying a suspected calibration discontinuity around 1945 (which has been termed the "Waldmeier discontinuity" [Svalgaard, 2011]). We compare R against the composite sunspot group data from the Royal Greenwich Observatory (RGO) network and the Solar Optical Observing Network (SOON), using both the number of sunspot groups, N_{G}, and the total area of the sunspots, A_{G}. In addition, we compare R with the recently developed interdiurnal variability geomagnetic indices IDV and IDV(1d). In all four cases, linearity of the relationship with R is not assumed, and care is taken to ensure that the relationship of each with R is the same before and after the putative calibration change. It is shown that the probability that a correction is not needed is of order 10^{−8} and that R is indeed too low before 1945. The optimum correction to R for values before 1945 is found to be 11.6%, 11.7%, 10.3% and 7.9% using A_{G}, N_{G}, IDV, and IDV(1d), respectively. The optimum value obtained by combining the sunspot group data is 11.6%, with an uncertainty range of 8.1-14.8% at the 2σ level. The geomagnetic indices provide an independent, yet less stringent, test but do give values that fall within the 2σ uncertainty band, with optimum values slightly lower than those from the sunspot group data. The probability of the correction needed being as large as 20%, as advocated by Svalgaard [2011], is shown to be 1.6 × 10^{−5}.
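
The essence of the calibration test can be sketched numerically: compare R against an independent sunspot-group series before and after 1945 and ask what multiplicative correction reconciles the two epochs. Unlike the paper, the sketch below assumes a linear relationship for brevity, and its data are invented.

```python
# Simplified sketch of estimating a pre-1945 calibration correction to R by
# comparison with an independent series (here N_G). Unlike the paper, this
# sketch assumes a linear R-N_G relationship; all data below are invented.
import numpy as np

# (N_G, R) pairs before and after the putative 1945 discontinuity.
ng_pre,  r_pre  = np.array([2, 5, 8, 12, 15.]), np.array([22, 58, 92, 139, 174.])
ng_post, r_post = np.array([3, 6, 9, 11, 14.]), np.array([39, 77, 116, 142, 181.])

# Proportionality R = a * N_G fitted separately in each epoch.
a_pre  = (ng_pre  @ r_pre)  / (ng_pre  @ ng_pre)
a_post = (ng_post @ r_post) / (ng_post @ ng_post)

correction = a_post / a_pre - 1
print(f"required pre-1945 correction: {100 * correction:.1f}% "
      "(the paper finds 11.7% from N_G)")
```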

Relevance: 40.00%

Abstract:

We investigate the relationship between the interdiurnal variation geomagnetic activity indices, IDV and IDV(1d), the corrected sunspot number, R_{C}, and the group sunspot number, R_{G}. R_{C} uses corrections for both the "Waldmeier discontinuity", as derived in Paper 1 [Lockwood et al., 2014c], and the "Wolf discontinuity" revealed by Leussu et al. [2013]. We show that the simple correlation of the geomagnetic indices with R_{C}^{n} or R_{G}^{n} masks a considerable solar cycle variation. When IDV(1d) or IDV is used to predict or evaluate the sunspot numbers, the errors are almost halved by allowing for the fact that the relationship varies over the solar cycle. The results indicate that differences between R_{C} and R_{G} have a variety of causes and are highly unlikely to be attributable to errors in R_{G} alone, as has recently been assumed. Because it is not known whether R_{C} or R_{G} is a better predictor of open flux emergence before 1874, a simple sunspot number composite is suggested which, like R_{G}, enables modelling of the open solar flux from 1610 onwards in Paper 3, but maintains the characteristics of R_{C}.
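
The error-halving result can be illustrated with a sketch that fits the index-sunspot relationship separately in bins of solar cycle phase and compares the residuals with a single global fit; the synthetic series and the exponent n = 0.7 are invented for the example.

```python
# Sketch of allowing the geomagnetic-index/sunspot relationship to vary over
# the solar cycle: fit IDV = a * R_C^n separately in phase bins rather than
# with one global fit. All series below are synthetic.
import numpy as np

rng = np.random.default_rng(2)
phase = rng.uniform(0.0, 1.0, 500)                   # solar cycle phase in [0, 1)
r_c = 5 + 80 * (1 - np.cos(2 * np.pi * phase)) / 2   # synthetic sunspot number
a_phase = 1.0 + 0.4 * np.sin(2 * np.pi * phase)      # phase-dependent coupling
idv = a_phase * r_c ** 0.7 + 0.5 * rng.standard_normal(500)

def lsq_coeff(x, y):
    # Least-squares coefficient a in y = a * x.
    return (x @ y) / (x @ x)

x = r_c ** 0.7

# Single fit over the whole record.
resid_global = idv - lsq_coeff(x, idv) * x

# Separate fits in five solar-cycle-phase bins.
resid_binned = np.empty_like(idv)
for k in range(5):
    m = (phase >= k / 5) & (phase < (k + 1) / 5)
    resid_binned[m] = idv[m] - lsq_coeff(x[m], idv[m]) * x[m]

print("rms error, global fit:      ", np.sqrt(np.mean(resid_global ** 2)))
print("rms error, phase-binned fit:", np.sqrt(np.mean(resid_binned ** 2)))
```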

Relevance: 40.00%

Abstract:

The correlation between the coronal source flux F_{S} and the total solar irradiance I_{TS} is re-evaluated in the light of an additional 5 years' data from the rising phase of solar cycle 23, and also by using cosmic ray fluxes detected at Earth. Tests on monthly averages show that the correlation with F_{S} deduced from the interplanetary magnetic field (correlation coefficient r = 0.62) is highly significant (99.999%), but that there are insufficient data for the higher correlation with annual means (r = 0.80) to be considered significant. Anti-correlations between I_{TS} and cosmic ray fluxes are found in monthly data for all stations and geomagnetic rigidity cut-offs (r ranging from −0.63 to −0.74), and these have significance levels between 85% and 98%. In all cases, the fit is poorest for the earliest data (i.e., prior to 1982). Excluding these data improves the anti-correlation with cosmic rays to r = −0.93 for one-year running means. Both the interplanetary magnetic field data and the cosmic ray fluxes indicate that the total solar irradiance lags behind the open solar flux with a delay that is estimated to have an optimum value of 2.8 months (and is within the uncertainty range 0.8-8.0 months at the 90% level).
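
The optimum-lag estimate amounts to scanning the cross-correlation of the two monthly series over trial lags and taking the lag that maximises r. A minimal sketch with synthetic data (the 3-month lag built in below is illustrative, not the paper's result):

```python
# Sketch of finding the optimum lag of I_TS behind F_S: correlate the two
# monthly series at trial lags and pick the lag maximising r. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 240                                                # 20 years of monthly means
f_s = np.sin(2 * np.pi * np.arange(n) / 132)           # ~11-year cycle in F_S
i_ts = np.roll(f_s, 3) + 0.2 * rng.standard_normal(n)  # I_TS lags by 3 months

def r_at_lag(lag):
    # Pearson correlation between F_S and I_TS shifted back by `lag` months.
    a, b = f_s[: n - lag], i_ts[lag:]
    return np.corrcoef(a, b)[0, 1]

lags = range(0, 13)
best = max(lags, key=r_at_lag)
print(f"optimum lag: {best} months (r = {r_at_lag(best):.2f})")
```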