923 results for Data Systems
Abstract:
This paper discusses how global financial institutions are using big data analytics within their compliance operations. Much previous research has focused on the strategic implications of big data, but little has considered how such tools are entwined with regulatory breaches and investigations in financial services. Our work comprises two in-depth qualitative case studies, each addressing a distinct type of analytics. The first case focuses on analytics that manage everyday compliance breaches and so are expected by managers. The second case focuses on analytics that facilitate investigation and litigation where serious unexpected breaches may have occurred. In doing so, the study focuses on the micro level of data practices to understand how these tools are influencing operational risks and practices. The paper draws on two bodies of literature, the social studies of information systems and the social studies of finance, to guide our analysis and practitioner recommendations. The cases illustrate how technologies are implicated in multijurisdictional challenges and regulatory conflicts at each end of the operational risk spectrum. We find that compliance analytics both shape and report regulatory matters, yet firms often have difficulty recruiting individuals with the relevant but diverse skill sets. The cases also underscore the growing need for financial organizations to adopt robust information governance policies and processes to ease future remediation efforts.
Abstract:
Understanding complex social-ecological systems, and anticipating how they may respond to rapid change, requires an approach that incorporates environmental, social, economic, and policy factors, usually in a context of fragmented data availability. We employed fuzzy cognitive mapping (FCM) to integrate these factors in the assessment of future wildfire risk in the Chiquitania region, Bolivia, where dealing with wildfires is becoming increasingly challenging due to reinforcing feedbacks between multiple drivers. We conducted semi-structured interviews and constructed different FCMs in focus groups to understand the regional dynamics of wildfire from diverse perspectives. We then used FCM modelling to evaluate possible adaptation scenarios under future drier climatic conditions, including a possible failure to respond in time to the emergent risk. The approach showed great potential to support decision-making for risk management: it helped identify key forcing variables and generated insights into the potential risks and trade-offs of different strategies. All scenarios showed increased wildfire risk under more frequent droughts. The 'Hands-off' scenario resulted in amplified impacts driven by intensifying trends, particularly affecting agricultural production. The 'Fire management' scenario, which adopted a bottom-up approach to improve controlled burning, showed fewer trade-offs between wildfire risk reduction and production than the 'Fire suppression' scenario. The findings highlight the importance of strategies that involve all actors who use fire, and the need to nest these strategies for a more systemic approach to managing wildfire risk. The FCM model could be used as a decision-support tool and serve as a 'boundary object' to facilitate collaboration and the integration of different forms of knowledge and perceptions of fire in the region. The approach also has the potential to support decisions in other dynamic frontier landscapes around the world that face an increasing risk of large wildfires.
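For readers unfamiliar with the mechanics behind an FCM scenario run, the sketch below shows the standard iteration: concept activations are repeatedly multiplied by the signed causal-weight matrix and squashed by a sigmoid until they settle. The concepts and weights here are invented for illustration; they are not the maps elicited in the Chiquitania focus groups.

```python
import numpy as np

def fcm_simulate(W, a0, steps=30, lam=1.0):
    """Iterate a fuzzy cognitive map toward a (quasi-)steady state.

    W[i, j] is the causal weight of concept i on concept j (in [-1, 1]);
    a0 is the initial activation vector. Each step aggregates incoming
    influence and squashes it with a sigmoid, as in standard FCM practice."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-lam * x))
    a = np.asarray(a0, dtype=float)
    for _ in range(steps):
        a = sigmoid(a @ W)
    return a

# Illustrative 4-concept map: drought, burning, fire risk, production.
# These weights are invented for the sketch, not taken from the paper's FCMs.
W = np.array([
    [0.0, 0.3, 0.7, -0.4],   # drought
    [0.0, 0.0, 0.6,  0.2],   # uncontrolled burning
    [0.0, 0.0, 0.0, -0.8],   # wildfire risk
    [0.0, 0.4, 0.0,  0.0],   # agricultural production
])
print(fcm_simulate(W, a0=[0.9, 0.5, 0.1, 0.6]))  # a 'more droughts' scenario
```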
Abstract:
This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models, all of which are linear. With data arriving online, the performance of every candidate sub-model is monitored over the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are left unchanged. The M model predictions are then optimally combined to produce the multi-model output: we minimise the mean square error over a recent data window subject to a sum-to-one constraint on the combination parameters, which yields a closed-form solution and thus maximal computational efficiency. In addition, at each time step the final prediction is chosen from either the resulting multiple model or the best sub-model, whichever performs better. Simulation results compare the approach with typical alternatives, including the linear RLS algorithm and several online non-linear methods, in terms of modelling performance and time consumption.
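A minimal sketch of the two numerical ingredients named in the abstract, assuming standard formulations: a recursive least squares update for one linear sub-model, and the closed-form, sum-to-one-constrained combination of the M selected predictions over a recent window (via a Lagrange multiplier). Variable names and the small ridge term are illustrative additions, not the authors' exact algorithm.

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One recursive least squares step for a linear sub-model y = w'x.
    lam is the forgetting factor; P tracks the inverse correlation matrix."""
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    w = w + k * (d - w @ x)          # move weights toward the new sample
    P = (P - np.outer(k, Px)) / lam
    return w, P

def combine_sum_to_one(E, y, ridge=1e-8):
    """Closed-form combination weights minimising ||y - E @ beta||^2
    subject to sum(beta) = 1, over a recent data window.
    E: (window, M) matrix of the M sub-model predictions; y: targets."""
    M = E.shape[1]
    A = E.T @ E + ridge * np.eye(M)     # small ridge for numerical safety
    ones = np.ones(M)
    v = np.linalg.solve(A, E.T @ y)     # unconstrained least squares part
    u = np.linalg.solve(A, ones)
    mu = (1.0 - ones @ v) / (ones @ u)  # Lagrange correction enforcing 1'beta = 1
    return v + mu * u
```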
Abstract:
The stratospheric mean-meridional circulation (MMC) and eddy mixing are compared among six meteorological reanalysis data sets: NCEP-NCAR, NCEP-CFSR, ERA-40, ERA-Interim, JRA-25, and JRA-55, for the period 1979-2012. The data sets produced with advanced systems (NCEP-CFSR, ERA-Interim, and JRA-55) generally reveal a weaker MMC in the Northern Hemisphere (NH) than those produced with older systems (NCEP-NCAR, ERA-40, and JRA-25). The mean mixing strength differs substantially among the products. In the NH lower stratosphere, the contribution of planetary-scale mixing is larger in the new data sets than in the old ones, whereas that of small-scale mixing is weaker. Conventional data assimilation techniques introduce analysis increments without maintaining physical balance, which may have caused an overly strong MMC and spurious small-scale eddies in the old data sets. At NH mid-latitudes, only ERA-Interim reveals a weakening MMC trend in the deep branch of the Brewer-Dobson circulation (BDC). The relative importance of eddy mixing compared with mean-meridional transport in the subtropical lower stratosphere shows increasing trends in ERA-Interim and JRA-55; this, together with the weakened MMC in the deep branch, may imply an increasing age-of-air (AoA) in the NH middle stratosphere in ERA-Interim. Overall, discrepancies between the different variables, and the trends derived from the different reanalyses, remain relatively large, suggesting that further investment in these products is needed to obtain a consolidated picture of observed changes in the BDC and the mechanisms that drive them.
Abstract:
Progress in wearable and implanted health-monitoring technologies has strong potential to alter the future of healthcare services by enabling ubiquitous monitoring of patients. A typical health-monitoring system consists of a network of wearable or implanted sensors that constantly monitor physiological parameters; the collected data are relayed over existing wireless communication protocols to a base station for further processing. This article gives researchers the information needed to compare existing low-power communication technologies that can support the rapid development and deployment of wireless body area network (WBAN) systems, focusing mainly on the remote monitoring of elderly or chronically ill patients in residential environments.
Abstract:
Deacidification of vegetable oils can be performed by liquid-liquid extraction as an alternative to the classical chemical and physical refining processes. This paper reports experimental data for systems containing refined babassu oil, lauric acid, ethanol, and water at 303.2 K, with different water mass fractions in the alcoholic solvent (0, 0.0557, 0.1045, 0.2029, and 0.2972). Diluting the solvent with water reduced the distribution coefficient values, which indicates a reduction in the loss of neutral oil. The experimental data were used to fit the NRTL equation parameters. The global deviation between the observed and estimated compositions was 0.0085, indicating that the model can accurately predict the behavior of the compounds at different levels of solvent hydration.
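For reference, the NRTL model mentioned here has the standard multicomponent form below; the fitted binary interaction parameters τ_ij and α_ij are reported in the paper itself, not here:

```latex
\ln\gamma_i =
\frac{\sum_j \tau_{ji} G_{ji} x_j}{\sum_k G_{ki} x_k}
+ \sum_j \frac{x_j G_{ij}}{\sum_k G_{kj} x_k}
\left(\tau_{ij} - \frac{\sum_m x_m \tau_{mj} G_{mj}}{\sum_k G_{kj} x_k}\right),
\qquad G_{ij} = \exp(-\alpha_{ij}\tau_{ij}),\quad \alpha_{ij} = \alpha_{ji}.
```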
Abstract:
Soybean oil can be deacidified by liquid-liquid extraction with ethanol. In the present paper, the liquid-liquid equilibria of systems composed of refined soybean oil, commercial linoleic acid, ethanol, and water were investigated at 298.2 K. The experimental data set obtained in the present study (at 298.2 K) and the results of Mohsen-Nia et al. [1] (at 303.2 K) and Rodrigues et al. [2] (at 323.2 K) were correlated with the non-random two-liquid (NRTL) model. The results indicate that the mutual solubility of the compounds decreases with increasing water content of the solvent and decreasing temperature of the solution; among these variables, the water content of the solvent had the strongest effect on the solubility of the components. The maximum deviation and average variance between the experimental and calculated compositions were 1.60% and 0.89%, respectively, indicating that the model can accurately predict the behavior of the compounds at different temperatures and degrees of hydration.
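The deviation figures quoted in this and the neighbouring abstracts are typically root-mean-square differences between experimental and calculated mass fractions in the two phases. A common definition in the LLE literature, assumed here since the abstract does not state one, is:

```latex
\Delta w = 100 \sqrt{
  \frac{\sum_{n=1}^{N}\sum_{i=1}^{C}
        \left[\left(w_{i,n}^{\mathrm{I,exp}} - w_{i,n}^{\mathrm{I,calc}}\right)^{2}
            + \left(w_{i,n}^{\mathrm{II,exp}} - w_{i,n}^{\mathrm{II,calc}}\right)^{2}\right]}
       {2NC}}
```

where N is the number of tie lines, C the number of components, and I and II denote the two liquid phases.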
Abstract:
The present paper reports phase-equilibrium experimental data for two systems composed of peanut oil or avocado seed oil + commercial oleic acid + ethanol + water at 298.2 K and different water contents in the solvent. Adding water to the solvent reduces the loss of neutral oil to the alcoholic phase and improves the solvent selectivity. The experimental data were correlated with the NRTL and UNIQUAC models. The global deviations between calculated and experimental values were 0.63% and 1.08%, respectively, for the systems containing avocado seed oil, and 0.65% and 0.98%, respectively, for the systems containing peanut oil. These results indicate that both models reproduce the experimental data correctly, although the NRTL model performed better.
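For comparison with the NRTL form given earlier, the UNIQUAC model used here combines a combinatorial (size and shape) contribution with a residual (energetic) one; the standard form, stated for reference only, is:

```latex
\ln\gamma_i =
\underbrace{\ln\frac{\Phi_i}{x_i} + \frac{z}{2}\,q_i \ln\frac{\theta_i}{\Phi_i}
 + l_i - \frac{\Phi_i}{x_i}\sum_j x_j l_j}_{\text{combinatorial}}
+\underbrace{q_i\!\left[1 - \ln\!\Big(\sum_j \theta_j \tau_{ji}\Big)
 - \sum_j \frac{\theta_j \tau_{ij}}{\sum_k \theta_k \tau_{kj}}\right]}_{\text{residual}},
```
```latex
\Phi_i = \frac{r_i x_i}{\sum_j r_j x_j},\quad
\theta_i = \frac{q_i x_i}{\sum_j q_j x_j},\quad
l_i = \frac{z}{2}(r_i - q_i) - (r_i - 1),\quad z = 10 .
```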
Abstract:
In this work, thermodynamic models for fitting the phase equilibria of binary systems were applied to predict the high-pressure phase equilibria of multicomponent systems of interest in food engineering, comparing the model results with new experimental data and with data from the literature. Two mixing rules were used with the Peng-Robinson equation of state: the van der Waals mixing rule and the composition-dependent mixing rule of Mathias et al. The systems chosen are of fundamental importance in the food industry: the binary systems CO2-limonene, CO2-citral, and CO2-linalool, and the ternary systems CO2-limonene-citral and CO2-limonene-linalool, for which knowledge of high-pressure phase equilibria is needed to extract and fractionate citrus essential oils. For the CO2-limonene system, some experimental data were also measured in this work. The results showed that the model with the composition-dependent mixing rule is well suited to describing the phase-equilibrium behavior of these systems.
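For reference, the Peng-Robinson equation of state with the classical van der Waals one-fluid mixing rule reads as below; the Mathias et al. rule extends a_m with a composition-dependent asymmetric binary term whose exact form is given in the paper and is not reproduced here:

```latex
P = \frac{RT}{v - b_m} - \frac{a_m(T)}{v(v + b_m) + b_m(v - b_m)},\qquad
a_m = \sum_i\sum_j x_i x_j (1 - k_{ij})\sqrt{a_i a_j},\qquad
b_m = \sum_i x_i b_i .
```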
Abstract:
We perform a statistical study of the orbital determination of the HD82943 extrasolar planetary system, using the current observational data set of N = 165 radial velocity (RV) measurements. Our aim is to analyse the dispersion of possible orbital fits whose residuals are compatible with the best solution, and to discuss the sensitivity of the results to both the data set and the error distribution around the best fit. Although some orbital parameters (e.g. the semimajor axis) appear well constrained, we show that the best fits for the HD82943 system are not robust, and at present it is not possible to estimate reliable solutions for these bodies. Finally, we discuss the possibility of a third planet, with a mass of 0.35 M_Jup and an orbital period of 900 d. Stability analysis and simulations of planetary migration indicate that such a hypothetical three-planet system could be locked in a double 2/1 mean-motion resonance, similar to the Laplace resonance of the three inner Galilean satellites of Jupiter.
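Orbital fits of this kind minimise the residuals of the standard Keplerian radial-velocity model, stated here for reference (the multi-planet fit sums one such term per planet):

```latex
v_r(t) = \gamma + \sum_{p} K_p\big[\cos(\nu_p(t) + \omega_p) + e_p\cos\omega_p\big],
\qquad
\chi^2 = \sum_{i=1}^{N}\left(\frac{v_i - v_r(t_i)}{\sigma_i}\right)^{2},
```

where K is the velocity semi-amplitude, ν(t) the true anomaly obtained from Kepler's equation, e the eccentricity, ω the argument of periastron, and γ the systemic velocity.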
Abstract:
Clusters of galaxies are the most impressive gravitationally bound systems in the universe, and their abundance (the cluster mass function) is an important statistic for probing the matter density parameter (Ω_m) and the amplitude of density fluctuations (σ_8). The cluster mass function is usually described in terms of the Press-Schechter (PS) formalism, in which the primordial density fluctuations are assumed to be a Gaussian random field. In previous works we proposed a non-Gaussian analytical extension of the PS approach based on the q-power-law distribution (PL) of nonextensive kinetic theory. In this paper, by applying the PL distribution to fit the observational mass function data from the X-ray highest flux-limited sample (HIFLUGCS), we find a strong degeneracy among the cosmic parameters σ_8 and Ω_m and the q parameter of the PL distribution. A joint analysis involving recent observations of the baryon acoustic oscillation (BAO) peak and the Cosmic Microwave Background (CMB) shift parameter is carried out to break this degeneracy and better constrain the physically relevant parameters. The present results suggest that the next generation of cluster surveys will be able to probe both the quantities of cosmological interest (σ_8, Ω_m) and the underlying cluster physics quantified by the q parameter.
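Schematically, the standard (Gaussian) Press-Schechter mass function and the q-power-law (Tsallis-type) kernel that generalises it take the forms below; these are standard textbook expressions, and the authors' exact non-Gaussian extension is given in their earlier works:

```latex
\frac{dn}{dM} = \sqrt{\frac{2}{\pi}}\,\frac{\bar\rho_m}{M^{2}}\,
\frac{\delta_c}{\sigma(M)}\left|\frac{d\ln\sigma}{d\ln M}\right|
\exp\!\left(-\frac{\delta_c^{2}}{2\sigma^{2}(M)}\right),
\qquad
P_q(\delta) \propto \left[1 - (1 - q)\,\frac{\delta^{2}}{2\sigma^{2}}\right]^{\frac{1}{1-q}},
```

with the q-distribution recovering the Gaussian kernel in the limit q → 1.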
Abstract:
The statement that pairs of individuals from different populations are often more genetically similar than pairs from the same population is a widespread idea inside and outside the scientific community. Witherspoon et al. ["Genetic similarities within and between human populations," Genetics 176:351-359 (2007)] proposed an index called the dissimilarity fraction (ω) to assess quantitatively the validity of this statement for genetic systems, and demonstrated that, as the number of loci increases, ω decreases to the point where, with enough sampling, the statement is false. In this study, we applied the dissimilarity fraction to Howells's craniometric database to establish whether similar results are obtained for cranial morphological traits. Whereas genetic studies have thousands of loci available, Howells's database provides no more than 55 metric traits, making the contribution of each variable important; to cope with this limitation, we developed a routine that takes this effect into consideration when calculating ω. Contrary to what was observed for the genetic data, our results show that cranial morphology asymptotically approaches a mean ω of 0.3 and therefore supports the initial statement, that is, that individuals from the same geographic region do not form clear and discrete clusters, further questioning the idea of discrete biological clusters in the human species. Finally, assuming that cranial morphology follows an additive polygenic model, the population-history signal of human craniometric traits has the same resolution as a neutral genetic system of no more than 20 loci.
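One plausible reading of the dissimilarity fraction, sketched below for concreteness: ω is the probability that a randomly drawn between-population pair is more similar (closer in trait space) than a randomly drawn within-population pair. This Monte Carlo estimator is illustrative only; it is neither Witherspoon et al.'s exact estimator nor the resampling routine the authors developed to handle the 55-trait limitation.

```python
import numpy as np

def dissimilarity_fraction(X, labels, n_draws=20_000, rng=None):
    """Monte Carlo estimate of the dissimilarity fraction (omega):
    the probability that a between-population pair is closer than a
    within-population pair. X: (n_individuals, n_traits) trait matrix;
    labels: population membership of each row."""
    rng = np.random.default_rng(rng)
    labels = np.asarray(labels)
    n = len(labels)
    hits = 0
    for _ in range(n_draws):
        # Draw a within-population pair by rejection sampling ...
        while True:
            i, j = rng.integers(n, size=2)
            if i != j and labels[i] == labels[j]:
                break
        # ... and a between-population pair.
        while True:
            k, l = rng.integers(n, size=2)
            if labels[k] != labels[l]:
                break
        d_within = np.linalg.norm(X[i] - X[j])
        d_between = np.linalg.norm(X[k] - X[l])
        hits += d_between < d_within
    return hits / n_draws
```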
Abstract:
Introduction: The aim of the present study was to evaluate the disinfection achieved by preparation with the ProTaper or Mtwo rotary system in canals infected with Enterococcus faecalis. Methods: Twenty-eight distobuccal canals of upper molars were used; the canals were sterilized after being enlarged to a size #20 file and then contaminated with an inoculum of an E. faecalis culture. After the incubation period, bacterial samples were collected and seeded on plates for counting of colony-forming units (CFU)/mL. The teeth were divided into 2 groups according to the rotary system used for instrumentation; 2 non-instrumented teeth served as the control group. After instrumentation, bacterial samples were again collected and seeded on plates for CFU/mL counts. The data were evaluated with the Wilcoxon and Mann-Whitney U tests. Results: Bacterial reduction was 81.94% and 84.29% for the ProTaper and Mtwo systems, respectively, with no statistically significant difference (P > .05). Conclusions: Both systems, ProTaper and Mtwo, reduced the amount of bacteria in the mechanical disinfection of the root canal system, demonstrating that they are suitable for this purpose. (J Endod 2010;36:1238-1240)
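The comparisons reported here are standard non-parametric tests; a minimal SciPy sketch is shown below. All CFU values are invented placeholders for illustration, not the study's data.

```python
from scipy.stats import mannwhitneyu, wilcoxon

# Invented percentage reductions in CFU/mL, for illustration only.
protaper = [78.2, 85.1, 80.9, 83.4, 81.5, 84.0, 79.8]
mtwo     = [86.0, 82.7, 84.9, 83.8, 84.1, 85.2, 83.0]

u_stat, p_between = mannwhitneyu(protaper, mtwo)          # unpaired groups
print(f"Mann-Whitney U = {u_stat}, p = {p_between:.3f}")  # p > .05: no difference

# Paired pre- vs post-instrumentation counts within one group (also invented).
pre  = [6.1e6, 5.4e6, 7.2e6, 6.8e6, 5.9e6, 6.3e6, 7.0e6]
post = [1.1e6, 0.9e6, 1.4e6, 1.2e6, 1.0e6, 1.1e6, 1.3e6]
w_stat, p_within = wilcoxon(pre, post)
print(f"Wilcoxon W = {w_stat}, p = {p_within:.3f}")
```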
Abstract:
Searching a dataset for elements that are similar to a given query element is a core problem in applications that manage complex data, and has been aided by metric access methods (MAMs). A growing number of applications require indices that can be built quickly and repeatedly, while also answering similarity queries faster. The increase in main memory capacity and its falling cost further motivate memory-based MAMs. In this paper, we propose the Onion-tree, a new and robust dynamic memory-based MAM that slices the metric space into disjoint subspaces to provide fast indexing of complex data. It introduces three major characteristics: (i) a partitioning method that controls the number of disjoint subspaces generated at each node; (ii) a replacement technique that can change the leaf-node pivots during insertion operations; and (iii) extended range and k-NN query algorithms that support the new partitioning method, including a new visiting order of the subspaces in k-NN queries. Performance tests with both real-world and synthetic datasets showed that the Onion-tree is very compact. Comparisons with the MM-tree and a memory-based version of the Slim-tree showed that the Onion-tree was always the fastest to build the index. The experiments also showed that the Onion-tree significantly improved range and k-NN query processing performance and was the most efficient MAM, followed by the MM-tree, which in turn outperformed the Slim-tree in almost all tests.
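The core trick shared by pivot-based MAMs such as the Onion-tree is triangle-inequality pruning; the generic sketch below shows that mechanism for a range query. It illustrates the family of methods only: the Onion-tree's actual node layout, subspace partitioning, and pivot replacement are described in the paper.

```python
import numpy as np

def range_query(pivots, pivot_dists, data, query, radius, dist):
    """Generic pivot-based filtering used by metric access methods.
    pivot_dists[i][p] = dist(data[i], pivots[p]), precomputed at build time.
    The triangle inequality lets us discard object i without computing
    dist(query, data[i]) whenever |d(q,p) - d(i,p)| > radius for some pivot p.
    This is the basic MAM mechanism, not the Onion-tree's exact structure."""
    q_to_p = [dist(query, p) for p in pivots]
    results = []
    for i, obj in enumerate(data):
        # Cheap pruning pass: any single pivot can rule the object out.
        if any(abs(qp - pivot_dists[i][p]) > radius
               for p, qp in enumerate(q_to_p)):
            continue
        if dist(query, obj) <= radius:   # confirm with a real distance
            results.append(i)
    return results

# Tiny usage example with Euclidean vectors.
dist = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
data = [[0, 0], [1, 1], [5, 5], [9, 9]]
pivots = [[0, 0], [9, 9]]
pd = [[dist(o, p) for p in pivots] for o in data]
print(range_query(pivots, pd, data, query=[1, 0], radius=2.0, dist=dist))
```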
Abstract:
In information visualization, adding and removing data elements can strongly impact the underlying visual space. We previously developed an inherently incremental technique (incBoard) that maintains a coherent disposition of elements from a dynamic multidimensional data set on a 2D grid as the set changes. Here, we introduce a novel layout that uses pairwise similarity from grid neighbors, as defined in incBoard, to reposition elements in the visual space, free from the constraints imposed by the grid. The board continues to be updated and can be displayed alongside the new space. As similar items are placed together while dissimilar neighbors are moved apart, the layout supports users in identifying clusters and subsets of related elements. Densely populated areas identified in the incSpace can be efficiently explored with the corresponding incBoard visualization, which is not susceptible to occlusion. The solution remains inherently incremental and maintains a coherent disposition of elements even for fully renewed sets. The algorithm uses relative positions for the initial placement of elements and raw dissimilarity to fine-tune the visualization. It has low computational cost, with complexity depending only on the size of the currently viewed subset, V: a data set of size N can thus be sequentially displayed in O(N) time, reaching O(N²) only if the complete set is displayed simultaneously.
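As a caricature of the incremental idea only: the toy sketch below places each arriving element in the free grid cell nearest its most similar already-placed neighbour. incBoard's actual repositioning rules and the grid-free incSpace layout are more sophisticated; everything here (the scalar items, the similarity function, the placement rule) is an invented stand-in.

```python
def incremental_place(items, similarity, grid_size=10):
    """Toy incremental grid layout: each new item lands in the free cell
    closest to its most similar already-placed item. Illustrative only;
    not incBoard's real update rules."""
    grid = {}    # (row, col) -> item index
    pos = {}     # item index -> (row, col)
    cells = [(r, c) for r in range(grid_size) for c in range(grid_size)]
    for idx in range(len(items)):
        if not pos:
            pos[idx] = (grid_size // 2, grid_size // 2)  # seed at the centre
            grid[pos[idx]] = idx
            continue
        # Most similar already-placed item acts as the anchor.
        anchor = max(pos, key=lambda j: similarity(items[idx], items[j]))
        ar, ac = pos[anchor]
        free = [cell for cell in cells if cell not in grid]
        cell = min(free, key=lambda rc: abs(rc[0] - ar) + abs(rc[1] - ac))
        pos[idx] = cell
        grid[cell] = idx
    return pos

sim = lambda a, b: -abs(a - b)   # similarity between scalar items
print(incremental_place([0.1, 0.15, 0.9, 0.88, 0.5], sim))
```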