966 results for financial data processing


Relevance:

90.00%

Publisher:

Abstract:

A study of the standards defined by the Open Geospatial Consortium, and more specifically of the Web Processing Service (WPS) standard. The work also had a practical component, consisting of the design and development of a client able to consume Web services created according to WPS, integrated into the gvSIG platform.
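A client such as the one described typically begins by querying the service's capabilities. A minimal sketch of the OGC key-value-pair GetCapabilities request a WPS 1.0.0 client issues; the endpoint URL is a placeholder, not a real service:

```python
# Build the standard WPS 1.0.0 GetCapabilities request URL (KVP encoding).
# The endpoint passed in is a hypothetical placeholder.
from urllib.parse import urlencode

def wps_get_capabilities_url(endpoint):
    params = {"service": "WPS", "request": "GetCapabilities", "version": "1.0.0"}
    return endpoint + "?" + urlencode(params)

print(wps_get_capabilities_url("https://example.org/wps"))
```

The response is an XML capabilities document listing the processes the client can then describe and execute.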

Relevance:

90.00%

Publisher:

Abstract:

The sparsely spaced, highly permeable fractures of the granitic rock aquifer at Stang-er-Brune (Brittany, France) form a well-connected fracture network of high permeability but unknown geometry. Previous work based on optical and acoustic logging, together with single-hole and cross-hole flowmeter data acquired in three neighbouring boreholes (70-100 m deep), has identified the most important permeable fractures crossing the boreholes and their hydraulic connections. To constrain possible flow paths by estimating the geometries of known and previously unknown fractures, we have acquired, processed and interpreted multifold, single- and cross-hole GPR data using 100 and 250 MHz antennas. The GPR data processing scheme, consisting of time-zero corrections, scaling, bandpass filtering, F-X deconvolution, eigenvector filtering, muting, pre-stack Kirchhoff depth migration and stacking, was used to differentiate fluid-filled fracture reflections from source-generated noise. The final stacked and pre-stack depth-migrated GPR sections provide high-resolution images of individual fractures (dipping 30-90°) in the surroundings (2-20 m for the 100 MHz antennas; 2-12 m for the 250 MHz antennas) of each borehole in a 2D plane projection that are of superior quality to those obtained from single-offset sections. Most fractures previously identified from hydraulic testing can be correlated to reflections in the single-hole data. Several previously unknown major near-vertical fractures have also been identified away from the boreholes.
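The filtering steps in such a scheme are standard signal-processing operations. A minimal sketch of one of them, a zero-phase bandpass filter applied trace by trace; the sampling rate, corner frequencies and synthetic data are illustrative assumptions, not values from the study:

```python
# Zero-phase Butterworth bandpass filtering of a set of GPR traces.
# All numeric choices here are illustrative, not taken from the paper.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(traces, fs, f_lo, f_hi, order=4):
    """Zero-phase bandpass filter; traces has shape (n_traces, n_samples)."""
    nyq = fs / 2.0
    b, a = butter(order, [f_lo / nyq, f_hi / nyq], btype="band")
    return filtfilt(b, a, traces, axis=-1)   # forward-backward => zero phase

fs = 2e9                                     # assume 2 GHz sampling
rng = np.random.default_rng(0)
raw = rng.normal(size=(8, 1024))             # stand-in for raw GPR traces
flt = bandpass(raw, fs, 100e6, 400e6)        # keep energy around the antenna band
print(flt.shape)
```

Zero-phase (forward-backward) filtering is the usual choice here because it preserves reflection arrival times.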

Relevance:

90.00%

Publisher:

Abstract:

BACKGROUND: Solexa/Illumina short-read ultra-high throughput DNA sequencing technology produces millions of short tags (up to 36 bases) by parallel sequencing-by-synthesis of DNA colonies. The processing and statistical analysis of such high-throughput data poses new challenges; currently a fair proportion of the tags are routinely discarded due to an inability to match them to a reference sequence, thereby reducing the effective throughput of the technology. RESULTS: We propose a novel base calling algorithm using model-based clustering and probability theory to identify ambiguous bases and code them with IUPAC symbols. We also select optimal sub-tags using a score based on information content to remove uncertain bases towards the ends of the reads. CONCLUSION: We show that the method improves genome coverage and number of usable tags as compared with Solexa's data processing pipeline by an average of 15%. An R package is provided which allows fast and accurate base calling of Solexa's fluorescence intensity files and the production of informative diagnostic plots.
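The IUPAC-coding idea can be illustrated with a toy rule: if the per-cycle fluorescence intensities do not single out one nucleotide, emit the IUPAC symbol covering all plausible calls. The relative-intensity cutoff below is an illustrative assumption, not the model-based clustering the authors use:

```python
# Toy ambiguous base caller: bases whose intensity is within a fraction of
# the strongest channel are all considered plausible, and the call is the
# IUPAC symbol for that set. The 0.5 cutoff is a made-up illustration.
IUPAC = {
    frozenset("A"): "A", frozenset("C"): "C", frozenset("G"): "G", frozenset("T"): "T",
    frozenset("AG"): "R", frozenset("CT"): "Y", frozenset("CG"): "S",
    frozenset("AT"): "W", frozenset("GT"): "K", frozenset("AC"): "M",
    frozenset("CGT"): "B", frozenset("AGT"): "D", frozenset("ACT"): "H",
    frozenset("ACG"): "V", frozenset("ACGT"): "N",
}

def call_base(intensities, cutoff=0.5):
    """intensities: dict base -> fluorescence intensity for one cycle."""
    top = max(intensities.values())
    plausible = frozenset(b for b, v in intensities.items() if v >= cutoff * top)
    return IUPAC[plausible]

print(call_base({"A": 900, "C": 50, "G": 870, "T": 30}))   # ambiguous A/G -> "R"
print(call_base({"A": 950, "C": 40, "G": 60, "T": 30}))    # clear call -> "A"
```

Coding ambiguity instead of discarding the tag is what recovers the otherwise-lost throughput described in the abstract.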

Relevance:

90.00%

Publisher:

Abstract:

Gaia is the most ambitious space astrometry mission currently envisaged and is a technological challenge in all its aspects. We describe a proposal for the payload data handling system of Gaia, as an example of a high-performance, real-time, concurrent, and pipelined data system. This proposal includes the front-end systems for the instrumentation, the data acquisition and management modules, the star data processing modules, and the payload data handling unit. We also review other payload and service module elements and we illustrate a data flux proposal.

Relevance:

90.00%

Publisher:

Abstract:

We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known. These are the probability density for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar-deutsche mark futures exchange, finding good agreement between theory and the observed data.
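A continuous-time random walk of this kind is easy to simulate once the two auxiliary densities are chosen. The sketch below assumes exponential pausing times and Gaussian jump magnitudes purely for illustration; the paper estimates these densities from the market data:

```python
# Minimal CTRW sketch: alternate random pausing times and random jumps,
# and record the walker's position (a stand-in for the log-price) at time T.
# The exponential/Gaussian choices are illustrative assumptions.
import numpy as np

def ctrw_price(T, tau_mean=1.0, jump_std=0.01, x0=0.0, seed=0):
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    while True:
        t += rng.exponential(tau_mean)    # pausing time until the next jump
        if t > T:
            return x                      # position at horizon T
        x += rng.normal(0.0, jump_std)    # jump magnitude

samples = np.array([ctrw_price(100.0, seed=s) for s in range(500)])
print(round(float(samples.std()), 4))     # spread grows with T, as diffusion predicts
```

Repeating this over many realisations yields the empirical price distribution that the formalism predicts from the two densities.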

Relevance:

90.00%

Publisher:

Abstract:

Background: Combining different sources of information to improve the available biological knowledge is a current challenge in bioinformatics. One of the most powerful approaches for integrating heterogeneous data types is the family of kernel-based methods. Kernel-based data integration consists of two basic steps: first, the right kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher or lower values of the variables analyzed. Conclusions: The integration of different datasets and the simultaneous representation of samples and variables together give a better understanding of biological knowledge.
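The two-step recipe can be sketched with off-the-shelf tools: one kernel per data source, a combination (assumed equal-weight here), and kernel PCA on the precomputed result. The data, kernel choices and weights are illustrative assumptions:

```python
# Step 1: choose a kernel per source; step 2: combine them; then run
# kernel PCA on the combined (precomputed) kernel matrix.
# Random matrices stand in for real biological data sources.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

rng = np.random.default_rng(0)
expr = rng.normal(size=(60, 30))      # stand-in for e.g. expression data
clin = rng.normal(size=(60, 5))       # stand-in for e.g. clinical covariates

K = 0.5 * rbf_kernel(expr) + 0.5 * linear_kernel(clin)   # equal-weight combination
pcs = KernelPCA(n_components=2, kernel="precomputed").fit_transform(K)
print(pcs.shape)                      # one 2-D embedding for all 60 samples
```

A convex combination of valid kernels is itself a valid kernel, which is what makes this simple averaging step legitimate.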

Relevance:

90.00%

Publisher:

Abstract:

DnaSP is a software package for a comprehensive analysis of DNA polymorphism data. Version 5 implements a number of new features and analytical methods allowing extensive DNA polymorphism analyses on large datasets. Among other features, the newly implemented methods allow for: (i) analyses on multiple data files; (ii) haplotype phasing; (iii) analyses on insertion/deletion polymorphism data; (iv) visualizing sliding window results integrated with available genome annotations in the UCSC browser.
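Feature (iv), the sliding-window scan, can be illustrated with a toy computation of nucleotide diversity (pi) per window over a small made-up alignment; DnaSP itself implements many more statistics:

```python
# Sliding-window nucleotide diversity: average pairwise differences per site,
# computed in overlapping windows along an alignment. The alignment and the
# window/step sizes are illustrative assumptions.
from itertools import combinations

def pi_window(seqs):
    """Average pairwise difference per site over equally long sequences."""
    pairs = list(combinations(seqs, 2))
    length = len(seqs[0])
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * length)

def sliding_pi(seqs, window=10, step=5):
    return [pi_window([s[i:i + window] for s in seqs])
            for i in range(0, len(seqs[0]) - window + 1, step)]

aln = ["ACGTACGTACGTACGTACGT",
       "ACGTACGAACGTACGTACGA",
       "ACGTACGTACGTTCGTACGA"]
print(sliding_pi(aln, window=10, step=5))
```

Plotting such per-window values against genome coordinates is exactly the kind of result DnaSP can overlay on UCSC browser annotations.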

Relevance:

90.00%

Publisher:

Abstract:

For a number of reasons, social responsibility has become a more essential part of business operations than before. Corporate social responsibility (CSR) is addressed through different means and aspects, but its overall effect on an organisation's performance, communication and underlying actions is indisputable. This thesis describes corporate social responsibility, and the main objective was to observe how CSR has developed in the case company by answering the main research question: how has CSR reporting evolved in UPM-Kymmene Oyj? In addition, the following questions were addressed: Is there a monetary value to CSR? What does a proficient CSR report consist of? What does corporate social responsibility consist of? A qualitative research method, content analysis to be precise, was chosen, and an extensive literature study was performed to establish the theoretical background for the empirical part of the study. Data for the empirical part was collected from UPM-Kymmene Oyj financial data and annual reports. The study shows that UPM-Kymmene Oyj's engagement with CSR and its CSR reporting have improved over time, but a few managerial implications could still be found. UPM-Kymmene Oyj's economic key figures only build shareholder value, and stakeholders are identified at a very general level. CSR data is also scattered throughout the annual report, which causes problems for readers. The scientific importance of this thesis arises from the profound, holistic way in which CSR has been addressed. It thus gives a good basis for understanding the underlying reasons for CSR, from society towards the organisation and vice versa.

Relevance:

90.00%

Publisher:

Abstract:

The definition of corporate social responsibility (CSR) has been developed since the 1950s, but even today there is no consensus on what CSR includes. The main purpose of this thesis was to find out whether financial performance is better among first adopters of CSR standards in the forest industry. To support the main purpose, it was also critical to investigate what kind of companies adopt CSR standards. The empirical part of the thesis was based on a survey of forest industry companies conducted in 2010 and on financial data gathered from different databases for the years 2003-2010. According to the results, early adopters of CSR standards often benefit from their first-mover position. In particular, the cash position and solvency of early adopter companies were better than those of later adopters or of companies that did not adopt CSR standards at all. Profitability seemed to be better among CSR standards adopters, but early adopters did not have a significantly better position than later adopters. CSR standards adopters were companies that considered themselves environmental performance pioneers and had employee-oriented management.

Relevance:

90.00%

Publisher:

Abstract:

The aim of this study was to contribute to current knowledge-based theory by focusing on a research gap: the empirically proven determination of the simultaneous but differentiable effects of intellectual capital (IC) assets and knowledge management (KM) practices on organisational performance (OP). The analysis built on past research and theoreticised interactions between latent constructs, specified using survey-based items measured from a sample of Finnish companies for IC and KM, and a dependent construct for OP determined using information available from financial databases. Two measures widely used and commonly recommended in the management science literature, the return on total assets (ROA) and the return on equity (ROE), were calculated for OP. The relationship between IC and KM impacting OP could thus be investigated, in relation to the hypotheses, using objectively derived performance indicators. Using financial OP measures also strengthened the dynamic features of the data needed for analysing simultaneous and causal dependences between the modelled constructs, which were specified using structural path models. Estimates of the parameters of the structural path models were obtained using a partial least squares-based regression estimator. The results showed that the path dependencies between IC and OP, or KM and OP, were always insignificant when analysed separately from any other interactions or indirect effects caused by simultaneous modelling, regardless of whether ROA or ROE was used as the OP measure. The dependency between the constructs for KM and IC appeared to be very strong and was always significant when modelled simultaneously with other possible interactions between the constructs, using either ROA or ROE to define OP.
This study, however, did not find statistically unambiguous evidence to prove the hypothesised causal mediation effects, which suggest, for instance, that the effects of KM practices on OP are mediated by the IC assets. Because some indication of fluctuating causal effects was observed, it was concluded that further studies are needed to verify the fundamental, and likely hidden, causal effects between the constructs of interest. It was therefore also recommended that complementary modelling and data processing measures be conducted to elucidate whether mediation effects occur between IC, KM and OP; their verification requires further investigation of the measured items and can build on the findings of this study.
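The two performance measures named above are the standard accounting ratios; written out, with made-up figures for illustration:

```python
# Return on total assets and return on equity, the two OP measures used
# in the study. The numbers below are illustrative, not from the data.
def roa(net_income, total_assets):
    return net_income / total_assets      # return on total assets

def roe(net_income, equity):
    return net_income / equity            # return on equity

print(roa(12.0, 200.0))   # 0.06
print(roe(12.0, 80.0))    # 0.15
```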

Relevance:

90.00%

Publisher:

Abstract:

This master's thesis examines the effects of increased material recycling on different waste-to-energy concepts. With a background study and a techno-economic computational method developed for the purpose, the feasibility of the chosen scenarios, with different combinations of mechanical treatment and waste firing technologies, can be evaluated. The background study covers the waste scene of Finland and the potential market areas of Poland and France. The calculated cases concentrate on municipal solid waste treatment in the Finnish operational environment. The chosen methodology is techno-economic feasibility assessment. It combines calculation methods from the literature and from practical engineering to define the material and energy balances of the chosen scenarios. The calculation results, together with other operational and financial data, are condensed into net present values that are compared between the scenarios. For the comparison, four scenarios, the most viable alternatives to one another, are established. The baseline scenario is grate firing of source-separated mixed municipal solid waste. The second scenario is fluidized bed combustion of solid recovered fuel produced in a mechanical treatment process with metal separation. The third scenario adds a biomaterial separation process to the solid recovered fuel preparation, and in the last scenario plastics are separated in addition to the previous operations. The results indicate that the mechanical treatment scenarios still need to overcome some problems to become feasible. The problems are related to profitability, residue disposal and technical reliability. Many uncertainties also surround the data gathered on waste characteristics, technical performance and markets. With legislative support and the development of further processing technologies and markets for the recycled materials, the scenarios with biomaterial and plastic separation may operate feasibly in the future.
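The net present value used to compare such scenarios is the usual discounted cash flow sum; the cash flows and discount rate below are illustrative assumptions, not figures from the thesis:

```python
# Net present value of a stream of yearly cash flows at a fixed discount rate.
def npv(rate, cashflows):
    """cashflows[0] is the initial (usually negative) investment at t = 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# e.g. a plant costing 100 up front, returning 30 a year for 5 years, at 8 %:
print(round(npv(0.08, [-100, 30, 30, 30, 30, 30]), 2))   # 19.78
```

The scenario with the highest NPV under consistent assumptions is the one preferred in such a comparison.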

Relevance:

90.00%

Publisher:

Abstract:

One of the fundamental problems with image processing of petrographic thin sections is that the appearance (colour/intensity) of a mineral grain will vary with the orientation of the crystal lattice relative to the preferred direction of the polarizing filters on a petrographic microscope. This makes it very difficult to determine grain boundaries, grain orientation and mineral species from a single captured image. To overcome this problem, the Rotating Polarizer Stage was used to replace the fixed polarizer and analyzer on a standard petrographic microscope. The Rotating Polarizer Stage rotates the polarizers while the thin section remains stationary, allowing for better data gathering possibilities. Instead of capturing a single image of a thin section, six composite data sets are created by rotating the polarizers through 90° (or 180° if quartz c-axis measurements need to be taken) in both plane and cross polarized light. The composite data sets can be viewed as separate images and consist of the average intensity image, the maximum intensity image, the minimum intensity image, the maximum position image, the minimum position image and the gradient image. The overall strategy used by the image processing system is to gather the composite data sets, determine the grain boundaries using the gradient image, classify the different mineral species present using the minimum and maximum intensity images, and then perform measurements of grain shape and, where possible, partial crystallographic orientation using the maximum intensity and maximum position images.
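The composite data sets can be sketched as simple per-pixel reductions over the stack of frames captured at successive polarizer angles. The random stack and the plain intensity gradient below are illustrative stand-ins for real captured frames and the system's actual gradient computation:

```python
# Derive the six composite data sets from a stack of frames, one frame per
# polarizer angle. The random stack is a stand-in for real captured images.
import numpy as np

rng = np.random.default_rng(1)
stack = rng.integers(0, 256, size=(19, 64, 64))   # e.g. one frame per 5° step over 90°

avg_img = stack.mean(axis=0)         # average intensity image
max_img = stack.max(axis=0)          # maximum intensity image
min_img = stack.min(axis=0)          # minimum intensity image
max_pos = stack.argmax(axis=0)       # frame (angle) of maximum intensity
min_pos = stack.argmin(axis=0)       # frame (angle) of minimum intensity
gy, gx = np.gradient(avg_img)        # simple stand-in for the gradient image
print(avg_img.shape)
```

The position images map frame indices back to polarizer angles, which is what makes partial crystallographic orientation recoverable per pixel.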