958 results for Big Three Banks
Abstract:
Huge amounts of data are generated from a variety of information sources in healthcare, and these sources originate from a variety of clinical information systems and corporate data warehouses. The data derived from these sources are used for analysis and trending purposes, thus playing an influential role as a real-time decision-making tool. The unstructured, narrative data provided by these sources qualify as healthcare big data, and researchers argue that the application of big data in healthcare might enable accountability and efficiency.
Abstract:
The concept of big data has already outperformed traditional data management efforts in almost all industries. In other instances it has succeeded in obtaining promising results that provide value from large-scale integration and analysis of heterogeneous data sources, for example genomic and proteomic information. Big data analytics, which describes the data sets and analytical techniques used in software applications that are so large and complex, has become increasingly important owing to its significant advantages, including better business decisions, cost reduction and the delivery of new products and services [1]. In a similar context, the health community has experienced not only more complex and larger data content, but also information systems that contain a large number of data sources with interrelated and interconnected data attributes. These have resulted in challenging and highly dynamic environments, leading to the creation of big data with its innumerable complexities, for instance the sharing of information while meeting the expected security requirements of stakeholders. Compared with other sectors, the health sector is still in the early stages of big data analysis. Key challenges include accommodating the volume, velocity and variety of healthcare data amid the current deluge of exponential growth. Given the complexity of big data, it is understood that while data storage and accessibility are technically manageable, the implementation of Information Accountability measures for healthcare big data might be a practical solution in support of information security, privacy and traceability. Transparency is one important measure that can demonstrate integrity, a vital factor in the healthcare service. Clarity about performance expectations is another Information Accountability measure, necessary to avoid data ambiguity and controversy about interpretation and, finally, liability [2].
According to current studies [3], Electronic Health Records (EHRs) are key information resources for big data analysis and are also composed of varied co-created values [3]. Common healthcare information originates from, and is used by, different actors and groups, which facilitates understanding of its relationship to other data sources. Consequently, healthcare services often operate as an integrated service bundle. Although it is a critical requirement in healthcare services and analytics, it is difficult to find a comprehensive set of guidelines for adopting EHRs to fulfil big data analysis requirements. Therefore, as a remedy, this research work focuses on a systematic approach containing comprehensive guidelines, with the accurate data that must be provided, to apply and evaluate big data analysis until the necessary decision-making requirements are fulfilled to improve the quality of healthcare services. Hence, we believe that this approach would subsequently improve quality of life.
Abstract:
With the ever-increasing amount of eHealth data available from various eHealth systems and sources, Health Big Data Analytics promises enticing benefits, such as enabling the discovery of new treatment options and improved decision making. However, concerns over the privacy of information have hindered the aggregation of this information. To address these concerns, we propose the use of Information Accountability protocols to provide patients with the ability to decide how and when their data can be shared and aggregated for use in big data research. In this paper, we discuss the issues surrounding Health Big Data Analytics and propose a consent-based model to address privacy concerns and aid in achieving the promised benefits of Big Data in eHealth.
Abstract:
The Australian Curriculum identified seven General Capabilities, including numeracy, to be embedded in all learning areas. However, it has been left to individual schools to manage this. Whilst there is a growing body of literature about pedagogies that embed numeracy in various learning areas, there are few studies from the management perspective. A social constructivist perspective and a multiple case study approach were used to explore the actions of school managers and mathematics teachers in three Queensland secondary schools, in order to investigate how they meet the Australian Curriculum requirement to embed numeracy throughout the curriculum. The study found a lack of coordinated cross-curricular approaches to numeracy in any of the schools studied. It illustrates the difficulties that arise when teachers do not share the Australian Curriculum cross-curricular vision of numeracy. Schools and curriculum authorities have not acknowledged the challenges for teachers in implementing cross-curricular numeracy, which include: limited understanding of numeracy; a lack of commitment; and inadequate skills. Successful embedding of numeracy in all learning areas requires: the commitment and support of school leaders, a review of school curriculum documents and pedagogical practices, professional development of teachers, and adequate funding to support these activities.
Abstract:
Big Datasets are endemic, but they are often notoriously difficult to analyse because of their size, heterogeneity, history and quality. The purpose of this paper is to open a discourse on the use of modern experimental design methods to analyse Big Data in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has wide generality and advantageous inferential and computational properties. In particular, the principled experimental design approach is shown to provide a flexible framework for analysis that, for certain classes of objectives and utility functions, delivers near equivalent answers compared with analyses of the full dataset under a controlled error rate. It can also provide a formalised method for iterative parameter estimation, model checking, identification of data gaps and evaluation of data quality. Finally, it has the potential to add value to other Big Data sampling algorithms, in particular divide-and-conquer strategies, by determining efficient sub-samples.
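The design-based sub-sampling idea can be illustrated with a toy sketch (our own construction, not taken from the paper): greedily select rows that maximize the D-optimality criterion det(XᵀX), then fit a model on the sub-sample and compare it with the full-data fit. All sizes, seeds and names here are illustrative.

```python
import numpy as np

def d_optimal_subsample(X, k, seed=0):
    """Greedily pick k rows of X that maximize det(X'X) of the sub-sample.

    A toy illustration of design-based sub-sampling: each step adds the
    candidate row that yields the largest determinant of the information
    matrix built from the rows chosen so far.
    """
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    chosen = [int(rng.integers(n))]          # random starting row
    for _ in range(k - 1):
        best, best_det = None, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            S = X[chosen + [i]]
            det = np.linalg.det(S.T @ S)     # information of candidate design
            if det > best_det:
                best, best_det = i, det
        chosen.append(best)
    return sorted(chosen)

# Fit on the sub-sample and compare with the full-data least-squares fit.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.uniform(-1, 1, 500)])
y = X @ np.array([2.0, -3.0]) + rng.normal(0, 0.1, 500)
idx = d_optimal_subsample(X, 20)
beta_sub, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta_sub, 2), np.round(beta_full, 2))
```

With only 20 of the 500 rows, the sub-sample fit lands close to the full-data estimate, because the D-optimal criterion concentrates the chosen points where they are most informative.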
Abstract:
The structures of two hydrated salts of 4-aminophenylarsonic acid (p-arsanilic acid), namely ammonium 4-aminophenylarsonate monohydrate, NH4(+)·C6H7AsNO3(-)·H2O, (I), and the one-dimensional coordination polymer catena-poly[[(4-aminophenylarsonato-κO)diaquasodium]-μ-aqua], [Na(C6H7AsNO3)(H2O)3]n, (II), have been determined. In the structure of the ammonium salt, (I), the ammonium cations, arsonate anions and water molecules interact through inter-species N-H...O and arsonate and water O-H...O hydrogen bonds, giving the common two-dimensional layers lying parallel to (010). These layers are extended into three dimensions through bridging hydrogen-bonding interactions involving the para-amine group acting both as a donor and an acceptor. In the structure of the sodium salt, (II), the Na(+) cation is coordinated by five O-atom donors, one from a single monodentate arsonate ligand, two from monodentate water molecules and two from bridging water molecules, giving a very distorted square-pyramidal coordination environment. The water bridges generate one-dimensional chains extending along c and extensive interchain O-H...O and N-H...O hydrogen-bonding interactions link these chains, giving an overall three-dimensional structure. The two structures reported here are the first reported examples of salts of p-arsanilic acid.
Abstract:
Increasingly large-scale applications are generating an unprecedented amount of data. However, the increasing gap between computation and I/O capacity on High End Computing (HEC) machines creates a severe bottleneck for data analysis. Instead of moving data from its source to the output storage, in-situ analytics processes output data while simulations are running. However, in-situ data analysis incurs much greater computing resource contention with simulations, and such contention severely degrades simulation performance on HEC platforms. Since different data processing strategies have different impacts on performance and cost, there is a consequent need for flexibility in the location of data analytics. In this paper, we explore and analyze several potential data-analytics placement strategies along the I/O path. To find the best strategy for reducing data movement in a given situation, we propose a flexible data analytics (FlexAnalytics) framework. Based on this framework, a FlexAnalytics prototype system is developed for analytics placement. The FlexAnalytics system enhances the scalability and flexibility of the current I/O stack on HEC platforms and is useful for data pre-processing, runtime data analysis and visualization, as well as for large-scale data transfer. Two use cases – scientific data compression and remote visualization – have been applied in the study to verify the performance of FlexAnalytics. Experimental results demonstrate that the FlexAnalytics framework increases data transition bandwidth and improves application end-to-end transfer performance.
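The placement trade-off can be sketched with a toy cost model (our own illustration, not the FlexAnalytics model itself): each placement pays different data-movement and contention costs, so the cheapest option depends on the situation. All parameter names and values are hypothetical.

```python
def placement_times(data_gb, reduce_ratio, node_bw, storage_bw, contention_s):
    """Rough end-to-end time (s) for three hypothetical analytics placements.

    data_gb      raw simulation output per step (GB)
    reduce_ratio fraction of data remaining after analysis
    node_bw      bandwidth from compute to staging nodes (GB/s)
    storage_bw   bandwidth to the storage system (GB/s)
    contention_s extra simulation delay when analysing in-situ (s)
    """
    reduced = data_gb * reduce_ratio
    return {
        # analyse on the compute nodes, ship only the reduced output
        "in-situ": contention_s + reduced / storage_bw,
        # ship raw data to staging nodes, analyse there, then store
        "in-transit": data_gb / node_bw + reduced / storage_bw,
        # write everything to storage, read it back for offline analysis
        "offline": 2 * data_gb / storage_bw,
    }

def best_placement(**kwargs):
    times = placement_times(**kwargs)
    return min(times, key=times.get)

# Low contention favours in-situ; heavy contention pushes analysis off-node.
print(best_placement(data_gb=100, reduce_ratio=0.1, node_bw=50,
                     storage_bw=5, contention_s=1.0))   # in-situ
print(best_placement(data_gb=100, reduce_ratio=0.1, node_bw=50,
                     storage_bw=5, contention_s=30.0))  # in-transit
```

The flip between the two calls mirrors the abstract's point: no single placement is best, which is why a framework that can move analytics along the I/O path is useful.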
Abstract:
The “third-generation” 3D graphene structures, T-junction graphene micro-wells (T-GMWs), are produced on cheap polycrystalline Cu foils in a single-step, low-temperature (270 °C), energy-efficient, and environment-friendly dry plasma-enabled process. T-GMWs comprise vertical graphene (VG) petal-like sheets that seamlessly integrate with each other and with the underlying horizontal graphene sheets by forming T-junctions. The micro-wells have pico-to-femtoliter storage capacity and precipitate compartmentalized PBS crystals. The T-GMW films are transferred from the Cu substrates, without damage to either, in de-ionized or tap water, at room temperature, and without commonly used sacrificial materials or hazardous chemicals. The Cu substrates are then re-used to produce similar-quality T-GMWs after simple plasma conditioning. The isolated T-GMW films are transferred to diverse substrates and devices and show remarkable recovery of their electrical, optical, and hazardous NO2 gas sensing properties upon repeated bending (down to 1 mm radius) and release of flexible transparent display plastic substrates. A plasma-enabled mechanism of T-GMW isolation in water is proposed and supported by analysis of the plasma surface modification of the Cu. Our GMWs are suitable for various optoelectronic, sensing, energy, and biomedical applications, while the growth approach is potentially scalable for future pilot-scale industrial production.
Abstract:
OBJECTIVE: To study the relation between temperature and mortality by estimating temperature-related mortality in Beijing, Shanghai, and Guangzhou. METHODS: Data on daily mortality, weather and air pollution in the three cities were collected. A distributed lag nonlinear model was established and used to analyze the effects of temperature on mortality. Current and future net temperature-related mortality was estimated. RESULTS: The association between temperature and mortality was J-shaped, with an increased death risk at both hot and cold temperatures in these cities. The effects of cold temperature on health lasted longer than those of hot temperature. The projected temperature-related mortality increased even as cold-related mortality decreased. The mortality was higher in Guangzhou than in Beijing and Shanghai. CONCLUSION: The impact of temperature on health varies across the three cities of China, which may have implications for climate policy making in China.
Abstract:
Flow patterns and aerodynamic characteristics behind three side-by-side square cylinders have been found to depend on the unequal gap spacings (g1 = s1/d and g2 = s2/d) between the three cylinders and on the Reynolds number (Re), using the Lattice Boltzmann method. The effect of the Reynolds number on the flow behind the three cylinders is numerically studied for 75 ≤ Re ≤ 175 at chosen unequal gap spacings (g1, g2) = (1.5, 1), (3, 4) and (7, 6). We also investigate the effect of g2 while keeping g1 fixed for Re = 150. It is found that the Reynolds number has a strong effect on the flow at the small unequal gap spacing (g1, g2) = (1.5, 1.0). It is also found that the secondary cylinder interaction frequency contributes significantly at unequal gap spacings for all chosen Reynolds numbers. It is observed that at the intermediate unequal gap spacing (g1, g2) = (3, 4) the primary vortex shedding frequency plays a major role and the effect of the secondary cylinder interaction frequencies almost disappears. Some vortices merge near the exit and, as a result, a small modulation is found in the drag and lift coefficients. This means that increasing the Reynolds number and the unequal gap spacing weakens the wake interaction between the cylinders. At the large unequal gap spacing (g1, g2) = (7, 6) the flow is fully periodic and no small modulation is found in the drag and lift coefficient signals. It is found that the jet flows at unequal gap spacings strongly influence the wake interaction as the Reynolds number varies. These unequal gap spacings produce separate wake patterns for different Reynolds numbers: flip-flopping; in-phase and anti-phase modulation synchronized; and in-phase and anti-phase synchronized. It is also observed that, in the case of equal gap spacing between the cylinders, the effect of gap spacing is stronger than that of the Reynolds number; in the case of unequal gap spacing, by contrast, the wake patterns depend strongly on both the unequal gap spacing and the Reynolds number.
The vorticity contour visualization, time-history analysis of the drag and lift coefficients, power spectrum analysis of the lift coefficient, and force statistics are systematically discussed for all chosen unequal gap spacings and Reynolds numbers to fully understand this valuable and practical problem.
Abstract:
A simple, inexpensive and sensitive kinetic spectrophotometric method was developed for the simultaneous determination of three anti-carcinogenic flavonoids – catechin, quercetin and naringenin – in fruit samples. A yellow chelate product was produced in the presence of neocuproine and Cu(I), a reduction product of the reaction between the flavonoids and Cu(II), and this enabled quantitative measurements with UV–vis spectrophotometry. The overlapping spectra obtained were resolved with chemometric calibration models, and the best-performing method was fast independent component analysis combined with principal component regression (fast-ICA/PCR); the limits of detection were 0.075, 0.057 and 0.063 mg L−1 for catechin, quercetin and naringenin, respectively. The novel method was found to significantly outperform the common HPLC procedure.
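The spectral-resolution step can be sketched with synthetic data (our own construction, with hypothetical Gaussian bands standing in for the flavonoid responses, not the paper's measurements) using scikit-learn's FastICA: mixtures of three overlapping bands are decomposed, and the recovered components track the underlying concentrations up to order, sign and scale.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical overlapping absorption bands for three components.
wavelengths = np.linspace(400, 500, 200)
def band(center, width=8.0):
    return np.exp(-((wavelengths - center) / width) ** 2)

pure = np.vstack([band(425), band(450), band(475)])       # 3 x 200 pure spectra
rng = np.random.default_rng(0)
conc = rng.uniform(0.1, 1.0, size=(150, 3))               # random concentrations
mixtures = conc @ pure + rng.normal(0, 1e-3, (150, 200))  # measured spectra

ica = FastICA(n_components=3, random_state=0, max_iter=1000)
scores = ica.fit_transform(mixtures)  # ~ concentrations, up to order/sign/scale

# Each recovered component should correlate strongly with one true column.
corr = np.abs(np.corrcoef(scores.T, conc.T)[:3, 3:])
print(np.round(corr.max(axis=1), 2))
```

In a calibration setting, the ICA scores would then be regressed against known standards (the PCR step); here we only check that the mixing has been undone.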
Abstract:
Network topology and routing are two important factors in determining the communication costs of big data applications at large scale. For a given Cluster, Cloud, or Grid system, the network topology is fixed, and static or dynamic routing protocols are preinstalled to direct the network traffic; users cannot change them once the system is deployed. Hence, it is hard for application developers to identify the optimal network topology and routing algorithm for applications with distinct communication patterns. In this study, we design a CCG virtual system (CCGVS), which first uses container-based virtualization to allow users to create a farm of lightweight virtual machines on a single host. It then uses software-defined networking (SDN) techniques to control the network traffic among these virtual machines. Users can change the network topology and control the network traffic programmatically, thereby enabling application developers to evaluate their applications on the same system with different network topologies and routing algorithms. Preliminary experimental results from both synthetic big data programs and the NPB benchmarks show that CCGVS can represent application performance variations caused by the network topology and routing algorithm.
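Why topology matters for communication cost can be illustrated with a tiny example (ours, not part of CCGVS): compare the total shortest-path hop count for an all-to-all exchange on a ring versus a star of the same size, using networkx.

```python
import networkx as nx

def total_hops(G):
    """Sum of shortest-path hop counts over all ordered node pairs;
    a crude proxy for the cost of an all-to-all exchange."""
    return sum(d for _, lengths in nx.all_pairs_shortest_path_length(G)
               for d in lengths.values())

ring = nx.cycle_graph(8)   # 8 nodes in a ring
star = nx.star_graph(7)    # 1 hub + 7 leaves = 8 nodes
print(total_hops(ring), total_hops(star))  # 128 98
```

The star wins here because every pair is at most two hops apart, at the price of a hub bottleneck that a hop count alone does not capture; being able to swap such topologies programmatically is exactly what an SDN-controlled testbed enables.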
Abstract:
Polybrominated diphenyl ethers (PBDEs) are a class of brominated flame retardants (BFRs) once extensively used in the plastics of a wide range of consumer products. The listing of certain congeners that are constituents of commercial PBDE mixtures (including c-octaBDE) in the Stockholm Convention and the tightening regulation of many other BFRs in recent years have created the need for a rapid and effective method of identifying BFR-containing plastics. A three-tiered testing strategy comparing results from non-destructive testing (X-ray fluorescence (XRF)) (n = 1714), a surface wipe test (n = 137) and destructive chemical analysis (n = 48) was undertaken to systematically identify BFRs in a wide range of consumer products. XRF rapidly identified bromine in 92% of products later confirmed to contain BFRs. Surface wipes of products identified tetrabromobisphenol A (TBBPA), c-octaBDE congeners and BDE-209 with relatively high accuracy (> 75%) when confirmed by destructive chemical analysis. A relationship between the amounts of BFRs detected in surface wipes and subsequent destructive testing shows promise in predicting not only the types of BFRs present but also in estimating their concentrations. Information about the types of products that may contain persistent BFRs will assist regulators in implementing policies to further reduce the occurrence of these chemicals in consumer products.
Abstract:
Ulcerative colitis is a common form of inflammatory bowel disease with a complex etiology. As part of the Wellcome Trust Case Control Consortium 2, we performed a genome-wide association scan for ulcerative colitis in 2,361 cases and 5,417 controls. Loci showing evidence of association at P < 1 × 10−5 were followed up by genotyping in an independent set of 2,321 cases and 4,818 controls. We find genome-wide significant evidence of association at three new loci, each containing at least one biologically relevant candidate gene, on chromosomes 20q13 (HNF4A; P = 3.2 × 10−17), 16q22 (CDH1 and CDH3; P = 2.8 × 10−8) and 7q31 (LAMB1; P = 3.0 × 10−8). Of note, CDH1 has recently been associated with susceptibility to colorectal cancer, an established complication of longstanding ulcerative colitis. The new associations suggest that changes in the integrity of the intestinal epithelial barrier may contribute to the pathogenesis of ulcerative colitis. © 2009 Nature America, Inc. All rights reserved.