959 results for Multi-product bank


Relevance: 30.00%

Publisher:

Abstract:

CODE, the Center for Orbit Determination in Europe, is a joint venture of the following four institutions: Astronomical Institute, University of Bern (AIUB), Bern, Switzerland; Federal Office of Topography swisstopo, Wabern, Switzerland; Federal Agency of Cartography and Geodesy (BKG), Frankfurt a. M., Germany; Institut für Astronomische und Physikalische Geodäsie, Technische Universität München (IAPG, TUM), Munich, Germany. It acts as a global analysis center of the International GNSS Service (IGS). The operational computations are performed at AIUB using the latest development version of the Bernese GNSS Software. In this context, a multi-GNSS solution is generated considering all active GPS, GLONASS, Galileo, BeiDou (except for GEOs), and QZSS satellites as a contribution to the IGS-MGEX project. The results are published with a delay of about two weeks.

Relevance: 30.00%

Publisher:

Abstract:

In 2005, the International Ocean Colour Coordinating Group (IOCCG) convened a working group to examine the state of the art in ocean colour data merging, which showed that the research techniques had matured sufficiently for creating long multi-sensor datasets (IOCCG, 2007). As a result, ESA initiated and funded the DUE GlobColour project (http://www.globcolour.info/) to develop a satellite-based ocean colour data set to support global carbon-cycle research. It aims to satisfy the scientific requirement for a long (10+ year) time series of consistently calibrated global ocean colour information with the best possible spatial coverage. This has been achieved by merging data from the three most capable sensors: SeaWiFS on GeoEye's Orbview-2 mission, MODIS on NASA's Aqua mission and MERIS on ESA's ENVISAT mission. In setting up the GlobColour project, three user organisations were invited to help. Their roles are to specify the detailed user requirements, act as a channel to the broader end-user community, and provide feedback and assessment of the results. The International Ocean Carbon Coordination Project (IOCCP), based at UNESCO in Paris, provides direct access to the carbon cycle modelling community's requirements and to the modellers themselves who will use the final products. The UK Met Office's National Centre for Ocean Forecasting (NCOF) in Exeter, UK, provides an understanding of the requirements of oceanography users, and the IOCCG brings its understanding of global user needs and valuable advice on best practice within the ocean colour science community. The three-year project kicked off in November 2005 under the leadership of ACRI-ST (France). The first year was a feasibility demonstration phase that was successfully concluded at a user consultation workshop organised by the Laboratoire d'Océanographie de Villefranche, France, in December 2006. Error statistics and inter-sensor biases were quantified by comparison with in situ measurements from moored optical buoys and ship-based campaigns, and used as an input to the merging. The second year was dedicated to the production of the time series. In total, more than 25 TB of input (level 2) data have been ingested and 14 TB of intermediate and output products created, with 4 TB of data distributed to the user community. Quality control (QC) is provided through the Diagnostic Data Sets (DDS), which are extracted sub-areas covering locations of in situ data collection or interesting oceanographic phenomena. This Full Product Set (FPS) covers global daily merged ocean colour products for the period 1997-2006 and is also freely available for use by the worldwide science community at http://www.globcolour.info/data_access_full_prod_set.html. The GlobColour service distributes global daily, 8-day and monthly data sets at 4.6 km resolution for chlorophyll-a concentration, normalised water-leaving radiances (412, 443, 490, 510, 531, 555, 620, 670, 681 and 709 nm), diffuse attenuation coefficient, coloured dissolved and detrital organic materials, total suspended matter or particulate backscattering coefficient, turbidity index, cloud fraction and quality indicators. Error statistics from the initial sensor characterisation are used as an input to the merging methods and propagate through the merging process to provide error estimates for the output merged products.
These error estimates are a key component of GlobColour, as they are invaluable to the users, particularly the modellers who need them in order to assimilate the ocean colour data into ocean simulations. An intensive phase of validation has been undertaken to assess the quality of the data set. In addition, inter-comparisons between the different merged datasets will help in further refining the techniques used. Both the final products and the quality assessment were presented at a second user consultation in Oslo on 20-22 November 2007, organised by the Norwegian Institute for Water Research (NIVA); presentations are available on the GlobColour website. At the request of the ESA Technical Officer for the GlobColour project, the FPS data set was mirrored in the PANGAEA data library.
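
The abstract does not specify the merging algorithm itself, but the described propagation of per-sensor error statistics into error estimates for the merged product can be illustrated with a simple inverse-variance weighted average. This is an illustrative sketch with hypothetical numbers, not the GlobColour implementation:

```python
import numpy as np

def merge_observations(values, variances):
    """Inverse-variance weighted merge of co-located sensor observations.

    values, variances: arrays of shape (n_sensors,) holding each sensor's
    retrieval and its error variance (NaN where a sensor has no coverage).
    Returns the merged value and the variance of the merged estimate.
    """
    mask = ~np.isnan(values) & ~np.isnan(variances)
    if not mask.any():
        return np.nan, np.nan
    w = 1.0 / variances[mask]          # weight each sensor by its error statistics
    merged = np.sum(w * values[mask]) / np.sum(w)
    merged_var = 1.0 / np.sum(w)       # error estimate propagated to the merged product
    return merged, merged_var

# Example: three sensors observing the same grid cell (hypothetical numbers)
vals = np.array([0.42, 0.47, np.nan])      # e.g. chlorophyll-a, mg m^-3
vars_ = np.array([0.01, 0.02, np.nan])
print(merge_observations(vals, vars_))
```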

Relevance: 30.00%

Publisher:

Abstract:

The improvement of financial intermediation functions is crucial for a robust banking system. When lending, banks have to cope with problems such as information asymmetry and adverse selection. In order to mitigate these problems, banks have to produce information and improve their lending techniques. During the 1998 financial crisis, Indonesia's banking system suffered severe damage, revealing that the country's banking intermediation functions did not work well. This paper examines the financial intermediation functions of banks in Indonesia and analyzes the importance of bank lending to firms. The focus is on medium-sized firms, and "relationship lending", one of the bank lending techniques, is used to examine financial intermediation in Indonesia. The results of logit regressions show that the relationship between a bank and a firm affects the probability of bank lending. The amount of borrowing and collateral are also affected by a firm's relationship with a bank. When viewed from the standpoint of relationship lending to medium-sized firms, Indonesian banks cannot be criticized for any malfunction of financial intermediation.
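
A minimal sketch of the kind of logit specification the abstract describes, using simulated data and hypothetical variable names (the paper's actual covariates and estimates are not reproduced here):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical firm-level data: relationship length (years) and firm size (log assets)
rng = np.random.default_rng(0)
n = 500
relationship_years = rng.integers(0, 15, n)
firm_size = rng.normal(10, 2, n)
# Simulated lending decision driven partly by the bank-firm relationship
lending = (0.3 * relationship_years + 0.2 * firm_size
           + rng.logistic(size=n) > 4).astype(int)

X = sm.add_constant(np.column_stack([relationship_years, firm_size]))
logit = sm.Logit(lending, X).fit(disp=False)
print(logit.summary())   # coefficient on relationship_years ~ effect of the relationship
```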

Relevance: 30.00%

Publisher:

Abstract:

The objective of this study was to propose a multi-criteria optimization and decision-making technique to solve food engineering problems. The technique was demonstrated using experimental data obtained on osmotic dehydration of carrot cubes in a sodium chloride solution. The Aggregating Functions Approach, the Adaptive Random Search Algorithm, and the Penalty Functions Approach were used in this study to compute the initial set of non-dominated or Pareto-optimal solutions. Multiple non-linear regression analysis was performed on a set of experimental data in order to obtain particular multi-objective functions (responses), namely water loss, solute gain, rehydration ratio, three different colour criteria of the rehydrated product, and sensory evaluation (organoleptic quality). Two multi-criteria decision-making approaches, the Analytic Hierarchy Process (AHP) and the Tabular Method (TM), were used simultaneously to choose the best alternative among the set of non-dominated solutions. The multi-criteria optimization and decision-making technique proposed in this study can facilitate the assessment of criteria weights, giving rise to a fairer, more consistent, and adequate final compromise solution or food process. This technique can be useful to food scientists in research and education, as well as to engineers involved in the improvement of a variety of food engineering processes.
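
The abstract names specific methods (Aggregating Functions, Adaptive Random Search, Penalty Functions, AHP, TM) whose details are not given here. As a minimal illustration of the underlying idea of a non-dominated set, the sketch below filters Pareto-optimal alternatives from a table of response values (hypothetical numbers, minimisation convention):

```python
import numpy as np

def pareto_front(objectives):
    """Return the indices of non-dominated (Pareto-optimal) rows.

    objectives: array of shape (n_solutions, n_criteria), all criteria to be
    minimised (flip the sign of criteria such as water loss that are maximised).
    """
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # solution i is dominated if another row is no worse on every criterion
        # and strictly better on at least one
        dominated = np.all(objectives <= objectives[i], axis=1) & \
                    np.any(objectives < objectives[i], axis=1)
        if dominated.any():
            keep[i] = False
    return np.where(keep)[0]

# Hypothetical responses: [water loss (maximise), solute gain (minimise), colour difference (minimise)]
resp = np.array([[55.0, 4.2, 3.1], [60.0, 5.0, 2.8], [52.0, 3.9, 3.5]])
resp_min = resp.copy()
resp_min[:, 0] *= -1            # convert water loss to a minimisation criterion
print(pareto_front(resp_min))
```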

Relevance: 30.00%

Publisher:

Abstract:

In coffee processing, the fermentation stage is considered one of the critical operations because of its impact on the final quality of the product. However, the level of control of the fermentation process on each farm is often inadequate, and the use of sensors for controlling coffee fermentation is not common. The objective of this work is to characterize the fermentation temperature in a fermentation tank by applying spatial interpolation and a new methodology of data analysis based on phase space diagrams of temperature data, collected by means of multi-distributed, low-cost and autonomous wireless sensors. A real coffee fermentation was supervised in the Cauca region (Colombia) with a network of 24 semi-passive TurboTag RFID temperature loggers with vacuum plastic covers, submerged directly in the fermenting mass. The temporal evolution and spatial distribution of temperature are described in terms of the phase diagram areas, which characterize the cyclic behaviour of temperature and highlight the significant heterogeneity of thermal conditions at different locations in the tank: the average fermentation temperature was 21.2 °C, with a temperature range of 4.6 °C and an average spatial standard deviation of ±1.21 °C. The upper part of the tank showed high temperature heterogeneity, the highest temperatures and therefore the highest fermentation rates, whereas at the bottom the computed phase diagram area was roughly half that occupied by the sensors in the upper part of the tank, indicating higher temperature homogeneity at this location.
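
The abstract characterises each sensor location by the area of its temperature phase-space diagram. One plausible way to compute such an area, assuming a (T, dT/dt) trajectory and a convex-hull approximation, is sketched below (an illustration, not necessarily the authors' exact procedure):

```python
import numpy as np
from scipy.spatial import ConvexHull

def phase_space_area(temps, dt=1.0):
    """Area of the convex hull of the (T, dT/dt) phase-space trajectory of one sensor.

    A larger area indicates wider cyclic temperature excursions at that location.
    temps: 1-D array of temperature readings at a fixed sampling interval dt (hours).
    """
    dT = np.gradient(temps, dt)               # temperature rate of change
    points = np.column_stack([temps, dT])
    return ConvexHull(points).volume          # in 2-D, .volume is the hull area

# Hypothetical sensor trace: slow warming plus a cyclic oscillation
t = np.arange(0, 18, 0.5)                     # 18 h of fermentation, 30 min sampling
trace = 19 + 0.15 * t + 0.8 * np.sin(2 * np.pi * t / 12)
print(phase_space_area(trace, dt=0.5))
```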

Relevance: 30.00%

Publisher:

Abstract:

The multi-dimensional classification problem is a generalisation of the recently popularised task of multi-label classification, where each data instance is associated with multiple class variables. There has been relatively little research carried out specific to multi-dimensional classification and, although one of the core goals is similar (modelling dependencies among classes), there are important differences, namely a higher number of possible classifications. In this paper we present a method for multi-dimensional classification, drawing from the most relevant multi-label research and combining it with important novel developments. Using a fast method to model the conditional dependence between class variables, we form super-class partitions and use them to build multi-dimensional learners, learning each super-class as an ordinary class and thus explicitly modelling class dependencies. Additionally, we present a mechanism to deal with the many class values inherent to super-classes, and thus make learning efficient. To investigate the effectiveness of this approach, we carry out an empirical evaluation on a range of multi-dimensional datasets, under different evaluation metrics and in comparison with high-performing existing multi-dimensional approaches from the literature. Analysis of results shows that our approach offers important performance gains over competing methods, while also exhibiting tractable running time.
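
A rough sketch of the super-class idea described above, under simplifying assumptions: pairwise mutual information stands in for the paper's fast conditional-dependence model, a greedy union groups dependent class variables, and each group's joint value is learned as a single compound class (scikit-learn, hypothetical dependence threshold):

```python
import numpy as np
from itertools import combinations
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import mutual_info_score

def build_superclass_models(X, Y, threshold=0.05):
    """Group dependent class variables into super-classes and fit one model per group.

    Y: (n_samples, n_class_vars) integer matrix of class values. Pairs of class
    variables with mutual information above `threshold` are merged into the same
    super-class; each super-class is then learned as a single compound class.
    """
    d = Y.shape[1]
    parent = list(range(d))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(d), 2):
        if mutual_info_score(Y[:, i], Y[:, j]) > threshold:
            parent[find(i)] = find(j)        # dependent variables share a super-class
    groups = {}
    for i in range(d):
        groups.setdefault(find(i), []).append(i)
    models = []
    for cols in groups.values():
        # encode the joint value of the grouped class variables as one compound label
        compound = np.array(['|'.join(map(str, row)) for row in Y[:, cols]])
        models.append((cols, RandomForestClassifier().fit(X, compound)))
    return models

# Toy example: three binary class variables, the first two strongly dependent
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y0 = (X[:, 0] > 0).astype(int)
Y = np.column_stack([y0, y0 ^ (rng.random(200) < 0.1), rng.integers(0, 2, 200)])
print([cols for cols, _ in build_superclass_models(X, Y)])
```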

Relevance: 30.00%

Publisher:

Abstract:

The fermentation stage is considered to be one of the critical steps in coffee processing due to its impact on the final quality of the product. The objective of this work is to characterise the temperature gradients in a fermentation tank using multi-distributed, low-cost and autonomous wireless sensors (23 semi-passive TurboTag® radio-frequency identifier (RFID) temperature loggers). Spatial interpolation in polar coordinates and an innovative methodology based on phase space diagrams are used. A real coffee fermentation process was supervised in the Cauca region (Colombia) with sensors submerged directly in the fermenting mass, leading to a 4.6 °C temperature range within the fermentation process. Spatial interpolation shows a maximum instant radial temperature gradient of 0.1 °C/cm from the centre to the perimeter of the tank and a vertical temperature gradient of 0.25 °C/cm for sensors with equal polar coordinates. The combination of spatial interpolation and phase space graphs consistently enables the identification of five local behaviours during fermentation (hot and cold spots).
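
As a small illustration of the kind of gradient reported (0.1 °C/cm radially), the sketch below estimates a radial temperature gradient from a few sensors at a common depth by fitting a least-squares slope of temperature against radius. The readings are hypothetical; the paper's spatial interpolation in polar coordinates is more elaborate:

```python
import numpy as np

def radial_gradient(radii_cm, temps):
    """Estimate the average radial temperature gradient (°C/cm) from sensors
    at different radii but a common depth and angle.

    A least-squares slope of T against r approximates dT/dr.
    """
    slope, _ = np.polyfit(radii_cm, temps, 1)
    return slope

# Hypothetical readings from the tank axis out to the wall at one depth
r = np.array([0.0, 15.0, 30.0, 45.0])          # cm from the tank axis
T = np.array([22.4, 21.9, 21.1, 20.3])         # °C
print(radial_gradient(r, T))                   # ~ -0.05 °C/cm
```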

Relevance: 30.00%

Publisher:

Abstract:

Publication of the work of Caja Granada

Relevance: 30.00%

Publisher:

Abstract:

Several studies have analyzed discretionary accruals to address earnings-smoothing behaviors in the banking industry. We argue that the characteristic link between accruals and earnings may be nonlinear, since both the incentives to manipulate income and the practical way to do so depend partially on the relative size of earnings. Using a sample of 15,268 US banks over the period 1996–2011, the main results in this paper suggest that, depending on the size of earnings, bank managers tend to engage in earnings-decreasing strategies when earnings are negative ("big-bath"), use earnings-increasing strategies when earnings are positive, and use provisions as a smoothing device when earnings are positive and substantial ("cookie-jar" accounting). This evidence, which cannot be explained by the earnings-smoothing hypothesis, is consistent with the compensation theory. Neglecting nonlinear patterns in the econometric modeling of these accruals may lead to misleading conclusions regarding the characteristic strategies used in earnings management.
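
A stylised sketch of how a regime-dependent (nonlinear) link between accruals and earnings could be specified, using simulated data and hypothetical variable names (the paper's actual accrual model and controls are richer):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical bank-year panel: pre-managed earnings and loan loss provisions (the accrual)
rng = np.random.default_rng(1)
n = 2000
earnings = rng.normal(0.01, 0.02, n)            # scaled by total assets
neg = (earnings < 0).astype(float)              # loss regime ("big bath" candidates)
high = (earnings > 0.02).astype(float)          # unusually high earnings ("cookie jar")
provisions = (0.002 - 0.4 * neg * earnings + 0.3 * high * earnings
              + rng.normal(0, 0.003, n))

# Piecewise specification: the earnings coefficient is allowed to differ by regime
X = sm.add_constant(np.column_stack([earnings, neg * earnings, high * earnings]))
print(sm.OLS(provisions, X).fit().params)
```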

Relevance: 30.00%

Publisher:

Abstract:

This paper provides concordance procedures for product-level trade and production data in the EU and examines the implications of changing product classifications on measured product adding and dropping at Belgian firms. Using the algorithms developed by Pierce and Schott (2012a, 2012b), the paper develops concordance procedures that allow researchers to trace changes in coding systems over time and to translate product-level production and trade data into a common classification that is consistent both within a single year and over time. Separate procedures are created for the eight-digit Combined Nomenclature system used to classify international trade activities at the product level within the European Union, as well as for the eight-digit Prodcom categories used to classify products in European domestic production data. The paper further highlights important differences in coverage between the Prodcom and Combined Nomenclature classifications which need to be taken into account when generating combined domestic production and international trade data at the product level. The use of consistent product codes over time results in less product adding and dropping at continuing firms in the Belgian export and production data.
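
The essence of a Pierce and Schott-style concordance is to group codes linked by reclassification events into time-consistent families. A minimal sketch of that grouping step using a union-find structure is shown below (hypothetical code pairs; the published algorithms handle many additional details):

```python
from collections import defaultdict

def consistent_families(recode_events):
    """Group product codes linked by reclassification events into time-consistent families.

    recode_events: iterable of (old_code, new_code) pairs drawn from official
    revisions of the classification. Codes connected through any chain of
    revisions end up in the same family, which can then serve as a single
    time-consistent 'synthetic' code.
    """
    parent = {}
    def find(c):
        parent.setdefault(c, c)
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c
    for old, new in recode_events:
        parent[find(old)] = find(new)
    families = defaultdict(set)
    for code in parent:
        families[find(code)].add(code)
    return list(families.values())

# Hypothetical revisions: eight-digit codes split and merged across years
events = [("84714910", "84715000"), ("84715000", "84713000"), ("61051000", "61052000")]
print(consistent_families(events))
```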

Relevance: 30.00%

Publisher:

Abstract:

We estimate the 'fundamental' component of euro area sovereign bond yield spreads, i.e. the part of bond spreads that can be justified by country-specific economic factors, euro area economic fundamentals, and international influences. The yield spread decomposition is achieved using a multi-market, no-arbitrage affine term structure model with a unique pricing kernel. More specifically, we use the canonical representation proposed by Joslin, Singleton, and Zhu (2011) and introduce, next to standard spanned factors, a set of unspanned macro factors, as in Joslin, Priebsch, and Singleton (2013). The model is applied to yield curve data from Belgium, France, Germany, Italy, and Spain over the period 2005-2013. Overall, our results show that economic fundamentals are the dominant drivers behind sovereign bond spreads. Nevertheless, shocks unrelated to the fundamental component of the spread have played an important role in the dynamics of bond spreads since the intensification of the sovereign debt crisis in the summer of 2011.
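
In Gaussian affine term-structure models of the kind referenced (Joslin, Singleton, and Zhu, 2011), model-implied yields are affine functions of the pricing factors. A standard statement of this relation, in notation of my own rather than the paper's, is:

```latex
% y_t^{(n)} is the n-period zero-coupon yield, X_t the vector of spanned pricing
% factors, and A_n, B_n the loadings obtained from no-arbitrage recursions under
% the risk-neutral dynamics of X_t.
\begin{equation}
  y_t^{(n)} = -\frac{1}{n}\left(A_n + B_n^{\top} X_t\right)
\end{equation}
% Unspanned macro factors enter the physical dynamics of X_t (and hence risk
% premia) without appearing in the cross-sectional loadings B_n.
```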

Relevance: 30.00%

Publisher:

Abstract:

Simplicity in design and minimal floor space requirements render the hydrocyclone the preferred classifier in mineral processing plants. Empirical models have been developed for design and process optimisation, but due to the complexity of the flow behaviour in the hydrocyclone these do not provide information on the internal separation mechanisms. To study the interaction of design variables, the flow behaviour needs to be considered, especially when modelling the new three-product cyclone. Computational fluid dynamics (CFD) was used to model the three-product cyclone, in particular the influence of the dual vortex finder arrangement on flow behaviour. From experimental work performed on the UG2 platinum ore, significant differences in the classification performance of the three-product cyclone were noticed with variations in the inner vortex finder length. Because of this, simulations were performed for a range of inner vortex finder lengths. Simulations were also conducted on a conventional hydrocyclone of the same size to enable a direct comparison of the flow behaviour between the two cyclone designs. Significantly, high velocities were observed for the three-product cyclone with an inner vortex finder extended deep into the conical section of the cyclone. CFD studies revealed that in the three-product cyclone a cylindrically shaped air-core is observed, similar to conventional hydrocyclones. A constant diameter air-core was observed throughout the inner vortex finder length, while no air-core was present in the annulus. (c) 2006 Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Publisher:

Abstract:

The global market has become increasingly dynamic, unpredictable and customer-driven. This has led to rising rates of new product introduction and turbulent demand patterns across product mixes. As a result, manufacturing enterprises face mounting challenges to be agile and responsive in coping with market changes, so as to remain competitive in producing and delivering products to the market in a timely and cost-effective manner. This paper introduces a currency-based iterative agent bidding mechanism to effectively and cost-efficiently integrate the activities associated with production planning and control, so as to achieve an optimised process plan and schedule. The aim is to enhance the agility of manufacturing systems to accommodate dynamic changes in the market and production. The iterative bidding mechanism is executed based on currency-like metrics: each operation to be performed is assigned a virtual currency value, and agents bid for the operation if they can make a virtual profit based on this value. These currency values are optimised iteratively, and the bidding process is repeated with each new set of values, with the aim of obtaining progressively better production plans and approaching near-optimality. A genetic algorithm is proposed to optimise the currency values at each iteration. The implementation of the mechanism and test case simulation results are also discussed. © 2012 Elsevier Ltd. All rights reserved.
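
An illustrative sketch of the currency-based bidding loop, with hypothetical agents and costs, and a toy mutation-and-selection loop standing in for the paper's genetic algorithm (no crossover; not the authors' implementation):

```python
import random

OPERATIONS = ["op1", "op2", "op3"]
AGENT_COSTS = {                      # hypothetical processing cost per agent and operation
    "m1": {"op1": 4.0, "op2": 7.0, "op3": 5.0},
    "m2": {"op1": 6.0, "op2": 3.0, "op3": 5.5},
}

def run_auction(currency):
    """Agents bid for operations that yield a positive virtual profit (currency > cost);
    each operation goes to the cheapest bidder. Returns (assignment, total cost)."""
    assignment, total = {}, 0.0
    for op in OPERATIONS:
        bidders = [(c[op], a) for a, c in AGENT_COSTS.items() if currency[op] > c[op]]
        if bidders:
            cost, agent = min(bidders)
            assignment[op], total = agent, total + cost
        else:
            total += 100.0           # penalty for an unassigned operation
    return assignment, total

def evolve_currency(generations=50, pop_size=20):
    """Toy evolutionary loop over currency vectors: mutate, keep the cheapest plans."""
    pop = [{op: random.uniform(0, 10) for op in OPERATIONS} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda cur: run_auction(cur)[1])
        parents = pop[: pop_size // 2]
        pop = parents + [{op: max(0.0, p[op] + random.gauss(0, 1)) for op in OPERATIONS}
                         for p in parents]
    pop.sort(key=lambda cur: run_auction(cur)[1])
    return pop[0], run_auction(pop[0])

print(evolve_currency())
```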