846 results for LIFETIME DATA


Relevance: 20.00%

Abstract:

The Bruneau–Jarbidge eruptive center of the central Snake River Plain in southern Idaho, USA, produced multiple rhyolite lava flows with volumes of <10 km³ to 200 km³ each from ~11.2 to 8.1 Ma, most of which postdate its climactic phase of large-volume explosive volcanism, represented by the Cougar Point Tuff, from 12.7 to 10.5 Ma. These lavas represent the waning stages of silicic volcanism at a major eruptive center of the Yellowstone hotspot track. Here we provide pyroxene compositions and thermometry results from several lavas that demonstrate that the demise of the silicic volcanic system was characterized by sustained, high pre-eruptive magma temperatures (mostly ≥950 °C) prior to the onset of exclusively basaltic volcanism at the eruptive center. Pyroxenes display a variety of textures in single samples, including solitary euhedral crystals as well as glomerocrysts, crystal clots and annealed microgranular inclusions of pyroxene ± magnetite ± plagioclase. Pigeonite and augite crystals are unzoned, and there are no detectable differences in major and minor element compositions according to textural variety: mineral compositions in the microgranular inclusions and crystal clots are identical to those of phenocrysts in the host lavas. In contrast to members of the preceding Cougar Point Tuff that host polymodal glass and mineral populations, pyroxene compositions in each of the lavas are characterized by a single discrete compositional mode rather than multiple modes. Collectively, the lavas reproduce and extend the range of Fe–Mg pyroxene compositional modes observed in the Cougar Point Tuff to more Mg-rich varieties. The compositionally homogeneous populations of pyroxene in each of the lavas, as well as the lack of core-to-rim zonation in individual crystals, suggest that individual eruptions were each fed by compositionally homogeneous magma reservoirs, and similarities with the Cougar Point Tuff suggest consanguinity of such reservoirs with those that supplied the polymodal Cougar Point Tuff. Pyroxene thermometry results obtained using QUILF equilibria yield pre-eruptive magma temperatures of 905 to 980 °C, and individual modes consistently record higher Ca contents and higher temperatures than pyroxenes with equivalent Fe–Mg ratios in the preceding Cougar Point Tuff. As is the case with the Cougar Point Tuff, evidence for up-temperature zonation within single crystals, which would be consistent with recycling of sub- or near-solidus material from antecedent magma reservoirs by rapid reheating, is extremely rare. Also, the absence of intra-crystal zonation, particularly at crystal rims, is not easily reconciled with cannibalization of caldera fill that subsided into pre-eruptive reservoirs. The textural, compositional and thermometric results are instead consistent with minor re-equilibration to higher temperatures of the unerupted crystalline residue from the explosive phase of volcanism, or perhaps with newly generated magmas from source materials very similar to those of the Cougar Point Tuff. Collectively, the data suggest that most of the pyroxene compositional diversity represented by the tuffs and lavas was produced early in the history of the eruptive center and that compositions across this range were preserved or duplicated through much of its lifetime.
Mineral compositions and thermometry of the multiple lavas suggest that unerupted magmas residual to the explosive phase of volcanism may have been stored at sustained, high temperatures after that phase ended. If so, such persistent high temperatures and large eruptive magma volumes require an abundant and persistent supply of basalt magmas to the lower and/or mid-crust, consistent with the tectonic setting of a continental hotspot.

Relevance: 20.00%

Abstract:

National pride is an important yet understudied topic with respect to economic behaviour. This thesis therefore investigates whether: 1) there is a "light" side of national pride through increased compliance, and a "dark" side linked to exclusion; 2) successful priming of national pride is linked to increased tax compliance; and 3) East German post-reunification outmigration is related to loyalty. The project comprises three related empirical studies, analysing evidence from a large, aggregated, international survey dataset; a tax compliance laboratory experiment combining psychological priming with measurement of heart rate variability; and data collected after the fall of the Berlin Wall (a situation approximating a natural experiment).

Relevance: 20.00%

Abstract:

Research on the development of efficient passivation materials for high-performance, stable quantum dot sensitized solar cells (QDSCs) is highly important. While ZnS is one of the most widely used passivation materials in QDSCs, this work finds that an alternative material, ZnSe, deposited on a CdS/CdSe/TiO2 photoanode to form a semi-core/shell structure, is more efficient at reducing electron recombination in QDSCs. The solar cell efficiency was improved from 1.86% for ZnSe0 (without coating) to 3.99% using 2 layers of ZnSe coating (ZnSe2) deposited by the successive ionic layer adsorption and reaction (SILAR) method. The short circuit current density (Jsc) nearly doubled (from 7.25 mA/cm2 to 13.4 mA/cm2), and the open circuit voltage (Voc) was enhanced by 100 mV using the ZnSe2 passivation layer compared to ZnSe0. Studies of the light harvesting efficiency (ηLHE) and the absorbed photon-to-current conversion efficiency (APCE) revealed that the ZnSe coating layer enhanced ηLHE at wavelengths beyond 500 nm and significantly increased the APCE over the 400−550 nm range. A nearly 100% APCE was obtained with ZnSe2, indicating excellent charge injection and collection in the device. The investigation of charge transport and recombination in the device indicated that enhanced electron collection efficiency and reduced electron recombination are responsible for the improved Jsc and Voc of the QDSCs. The effective electron lifetime of the device with ZnSe2 was nearly 6 times higher than that of ZnSe0, while the electron diffusion coefficient was largely unaffected by the coating. A study of the regeneration of QDs after photoinduced excitation indicated that hole transport from the QDs to the reduced species (S2−) in the electrolyte was very efficient even when the QDs were coated with a thick ZnSe shell (three layers). For comparison, a ZnS-coated CdS/CdSe sensitized solar cell with optimum shell thickness was also fabricated; it generated a lower energy conversion efficiency (η = 3.43%) than the ZnSe-based QDSC counterpart due to a lower Voc and FF. This study suggests that ZnSe may be a more efficient passivation layer than ZnS, which is attributed to the type II energy band alignment of the core (CdS/CdSe quantum dots) and passivation shell (ZnSe) structure, leading to more efficient electron−hole separation and slower electron recombination.
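As a rough cross-check of how the reported gains translate into the quoted efficiencies, the standard relation η = Jsc × Voc × FF / Pin can be applied. The sketch below assumes the usual 100 mW/cm² AM1.5G input power and uses illustrative Voc and FF values (not reported in the abstract) chosen only to land near the quoted 3.99%.

```python
# Minimal sketch relating J_sc, V_oc and fill factor to power conversion efficiency
# via eta = J_sc * V_oc * FF / P_in. Only J_sc (13.4 mA/cm^2) and eta (~3.99%) come
# from the abstract; V_oc and FF below are illustrative placeholders.

P_IN = 100.0  # incident power density under AM1.5G illumination, mW/cm^2

def efficiency(j_sc_ma_cm2: float, v_oc_v: float, fill_factor: float) -> float:
    """Power conversion efficiency in percent."""
    return 100.0 * (j_sc_ma_cm2 * v_oc_v * fill_factor) / P_IN

# Hypothetical V_oc and FF chosen so the result lands near the reported 3.99%.
print(round(efficiency(13.4, 0.55, 0.54), 2))  # ~3.98
```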

Relevance: 20.00%

Abstract:

Since 2006, we have been conducting urban informatics research that we define as “the study, design, and practice of urban experiences across different urban contexts that are created by new opportunities of real-time, ubiquitous technology and the augmentation that mediates the physical and digital layers of people networks and urban infrastructures” [1]. Various new research initiatives under the label “urban informatics” have since been started by universities (e.g., NYU’s Center for Urban Science and Progress) and industry (e.g., Arup, McKinsey) worldwide. Yet many of these new initiatives are limited to what Townsend calls “data-driven approaches to urban improvement” [2]. One of the key challenges is that no quantity of aggregated data translates easily into the quality insights needed to better understand cities. In this talk, I will raise questions about the purpose of urban informatics research beyond data, and show examples of media architecture, participatory city making, and citizen activism. I argue for (1) broadening the disciplinary foundations that urban science approaches draw on; (2) maintaining a hybrid perspective that considers both the bird’s eye view and the citizen’s view; and (3) employing design research not merely to understand, but to produce actionable knowledge that will drive change for good.

Relevance: 20.00%

Abstract:

Big data analysis in the healthcare sector is still in its early stages compared with other business sectors, for numerous reasons, including accommodating the volume, velocity and variety of healthcare data, and identifying platforms that can examine data from multiple sources, such as clinical records, genomic data, financial systems, and administrative systems. The Electronic Health Record (EHR) is a key information resource for big data analysis and is also composed of varied co-created values. Successful integration and crossing of different subfields of healthcare data, such as biomedical informatics and health informatics, could lead to huge improvements for the end users of the healthcare system, i.e. the patients.

Relevance: 20.00%

Abstract:

Huge amounts of data are generated from a variety of information sources in healthcare, and these sources originate from a range of clinical information systems and corporate data warehouses. The data derived from these sources are used for analysis and trending purposes, thus playing an influential role as a real-time decision-making tool. The unstructured, narrative data provided by these sources qualify as healthcare big data, and researchers argue that the application of big data in healthcare might enable accountability and efficiency.

Relevance: 20.00%

Abstract:

Distributed systems are widely used for solving large-scale and data-intensive computing problems, including all-to-all comparison (ATAC) problems. However, when used for ATAC problems, existing computational frameworks such as Hadoop focus on load balancing for allocating comparison tasks, without careful consideration of data distribution and storage usage. While Hadoop-based solutions provide users with simplicity of implementation, their inherent MapReduce computing pattern does not match the ATAC pattern. This leads to load imbalances and poor data locality when Hadoop's data distribution strategy is used for ATAC problems. Here we present a data distribution strategy which considers data locality, load balancing and storage savings for ATAC computing problems in homogeneous distributed systems. A simulated annealing algorithm is developed for data distribution and task scheduling. Experimental results show a significant performance improvement for our approach over Hadoop-based solutions.
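To make the simulated-annealing idea concrete, the sketch below shows a deliberately simplified version of such a data-distribution search: each data item is placed on exactly one node, and the cost penalises load imbalance plus comparison pairs whose two items end up on different nodes. It is an illustration of the general technique under those assumptions, not the strategy developed in the paper, which also accounts for storage usage.

```python
# Simplified simulated annealing for distributing data items across nodes in an
# all-to-all comparison (ATAC) setting. Illustrative only: each item is stored on
# one node, and the cost penalises load imbalance plus comparison pairs whose two
# items sit on different nodes (poor locality). Storage savings are not modelled.
import math
import random
from itertools import combinations

def cost(assignment, num_nodes):
    loads = [0] * num_nodes
    for node in assignment:
        loads[node] += 1
    imbalance = max(loads) - min(loads)
    remote_pairs = sum(1 for i, j in combinations(range(len(assignment)), 2)
                       if assignment[i] != assignment[j])
    return imbalance + 0.1 * remote_pairs  # weights are arbitrary for illustration

def anneal(num_items, num_nodes, steps=5000, t0=5.0, cooling=0.999):
    state = [random.randrange(num_nodes) for _ in range(num_items)]
    current = cost(state, num_nodes)
    best, best_cost, temp = list(state), current, t0
    for _ in range(steps):
        item, new_node = random.randrange(num_items), random.randrange(num_nodes)
        old_node = state[item]
        state[item] = new_node                      # random local perturbation
        candidate = cost(state, num_nodes)
        # Metropolis acceptance: always take improvements, occasionally accept
        # worse states so the search can escape local optima.
        if candidate <= current or random.random() < math.exp((current - candidate) / temp):
            current = candidate
            if candidate < best_cost:
                best, best_cost = list(state), candidate
        else:
            state[item] = old_node                  # reject the move
        temp *= cooling
    return best, best_cost

print(anneal(num_items=20, num_nodes=4))
```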

Relevance: 20.00%

Abstract:

The increase in data center dependent services has made energy optimization of data centers one of the most exigent challenges in today's Information Age. Green and energy-efficient measures are essential for reducing the carbon footprint and exorbitant energy costs of data centers. However, inefficient application management of data centers results in high energy consumption and low resource utilization efficiency. Unfortunately, in most cases, deploying an energy-efficient application management solution inevitably degrades the resource utilization efficiency of the data centers. To address this problem, a penalty-based Genetic Algorithm (GA) is presented in this paper to solve a defined profile-based application assignment problem while maintaining a trade-off between power consumption performance and resource utilization performance. Case studies show that the penalty-based GA is highly scalable and provides 16% to 32% better solutions than a greedy algorithm.
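The core of a penalty-based GA is a fitness function that keeps infeasible candidates in the population but penalises them. The sketch below illustrates that idea for an application-to-server assignment: it combines a simple linear power model with a wasted-capacity term and a heavy penalty for capacity overflows. The power model, weights and data are illustrative assumptions, not the paper's formulation.

```python
# Illustrative penalty-based fitness for assigning application profiles to servers
# (lower is better). Not the paper's exact model: infeasible assignments (capacity
# overflows) are retained but heavily penalised, letting the GA trade off power
# consumption against resource utilisation.

def fitness(assignment, demands, capacities, idle_power=100.0, peak_power=250.0,
            penalty_weight=1000.0):
    """assignment[i] = index of the server hosting application i (demands/capacities in CPU units)."""
    load = [0.0] * len(capacities)
    for app, server in enumerate(assignment):
        load[server] += demands[app]

    power = wasted = overflow = 0.0
    for server, used in enumerate(load):
        if used == 0:
            continue  # empty servers are assumed to be switched off
        utilisation = min(used / capacities[server], 1.0)
        power += idle_power + (peak_power - idle_power) * utilisation  # linear power model
        wasted += max(capacities[server] - used, 0.0)                  # unused capacity
        overflow += max(used - capacities[server], 0.0)                # constraint violation

    return power + wasted + penalty_weight * overflow

print(fitness([0, 0, 1], demands=[4, 3, 6], capacities=[8, 8]))
```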

Relevance: 20.00%

Abstract:

Reductions in DNA integrity, genome stability, and telomere length are strongly associated with the aging process, age-related diseases and the age-related loss of muscle mass. However, in people who reach an age far beyond their statistical life expectancy, the prevalence of diseases such as cancer, cardiovascular disease, diabetes or dementia is much lower than in "averagely" aged humans. These inverse observations in nonagenarians (90–99 years), centenarians (100–109 years) and super-centenarians (110 years and older) require a closer look at the dynamics underlying DNA damage in the oldest old of our society. Available data indicate improved DNA repair and antioxidant defense mechanisms in "super old" humans, comparable with those of much younger cohorts. Partly as a result of these enhanced endogenous repair and protective mechanisms, the oldest old appear to cope better with risk factors for DNA damage over their lifetime than subjects whose lifespan coincides with the statistical life expectancy. This model is supported by study results demonstrating superior chromosomal stability, telomere dynamics and DNA integrity in "successful agers". There is also compelling evidence suggesting that lifestyle-related factors, including regular physical activity, a well-balanced diet and minimized psycho-social stress, can reduce DNA damage and improve chromosomal stability. The most conclusive picture that emerges from reviewing the literature is that reaching "super old" age appears to be primarily determined by hereditary/genetic factors, while a healthy lifestyle additionally contributes to achieving the individual maximum lifespan in humans. More research is required in this rapidly growing population of super old people. In particular, there is a need for more comprehensive investigations, including short- and long-term lifestyle interventions, as well as investigations focusing on the mechanisms causing DNA damage, mutations, and telomere shortening.

Relevance: 20.00%

Abstract:

In the past few years, the virtual machine (VM) placement problem has been studied intensively and many algorithms for it have been proposed. However, these algorithms have not been widely used in today's cloud data centers because they do not consider the cost of migrating from the current VM placement to the new optimal placement. As a result, the gain from optimizing VM placement may be outweighed by the migration cost incurred in moving from the current placement to the new one. To address this issue, this paper presents a penalty-based genetic algorithm (GA) for the VM placement problem that considers the migration cost in addition to the energy consumption of the new placement and its total inter-VM traffic flow. The GA has been implemented and evaluated by experiments, and the experimental results show that it outperforms two well-known algorithms for the VM placement problem.
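To make the combined objective concrete, the sketch below charges a candidate placement for the hosts it keeps powered on, for inter-VM traffic that has to cross host boundaries, and for the VMs it forces to migrate away from the current placement. The concrete proxies and weights are illustrative assumptions, not the paper's cost model.

```python
# Illustrative combined cost for a candidate VM placement that also charges for
# migrations away from the current placement. The three terms mirror the abstract
# (energy, inter-VM traffic, migration cost); the proxies and weights are assumptions.

def placement_cost(new_placement, current_placement, vm_size, traffic,
                   w_energy=1.0, w_traffic=1.0, w_migration=1.0):
    """Placements map VM name -> host name; traffic maps (vm_a, vm_b) -> traffic rate."""
    # Energy proxy: number of hosts that remain powered on.
    hosts_used = len(set(new_placement.values()))

    # Traffic term: inter-VM traffic that must cross host boundaries.
    cross_traffic = sum(rate for (a, b), rate in traffic.items()
                        if new_placement[a] != new_placement[b])

    # Migration term: total size of VMs that have to move.
    migrated = sum(vm_size[vm] for vm in new_placement
                   if new_placement[vm] != current_placement[vm])

    return w_energy * hosts_used + w_traffic * cross_traffic + w_migration * migrated

current = {"vm1": "h1", "vm2": "h1", "vm3": "h2"}
candidate = {"vm1": "h1", "vm2": "h2", "vm3": "h2"}
print(placement_cost(candidate, current,
                     vm_size={"vm1": 2, "vm2": 4, "vm3": 2},
                     traffic={("vm1", "vm2"): 5, ("vm2", "vm3"): 1}))  # 2 + 5 + 4 = 11
```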

Relevance: 20.00%

Abstract:

Although live VM migration has been intensively studied, the live migration of multiple interdependent VMs has hardly been investigated. The most important problem in migrating multiple interdependent VMs is how to schedule the migrations, as the schedule directly affects the total migration time and the total downtime of those VMs. Aiming to minimize the total migration time and the total downtime simultaneously, this paper presents a Strength Pareto Evolutionary Algorithm 2 (SPEA2) for the multi-VM migration scheduling problem. The SPEA2 has been evaluated by experiments, and the results show that it can generate a set of VM migration schedules with a shorter total migration time and a shorter total downtime than an existing genetic algorithm, namely the Random Key Genetic Algorithm (RKGA). This paper also studies the scalability of the SPEA2.
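Since SPEA2 treats total migration time and total downtime as two separate objectives, selection is driven by Pareto dominance rather than by a single weighted score. The sketch below shows that dominance test and how it filters a set of candidate schedule scores; the objective values are made up for illustration and the evaluation of an actual migration schedule is not modelled.

```python
# Pareto-dominance filtering over (total_migration_time, total_downtime) pairs,
# the two objectives named in the abstract. Values are illustrative; evaluating
# an actual migration schedule is not modelled here.
from typing import List, Tuple

Objectives = Tuple[float, float]  # (total migration time, total downtime)

def dominates(a: Objectives, b: Objectives) -> bool:
    """a dominates b if it is no worse in both objectives and strictly better in at least one."""
    return a[0] <= b[0] and a[1] <= b[1] and (a[0] < b[0] or a[1] < b[1])

def pareto_front(candidates: List[Objectives]) -> List[Objectives]:
    """Keep only the non-dominated candidate schedules."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

schedules = [(120.0, 9.0), (150.0, 4.0), (130.0, 12.0), (110.0, 15.0)]
print(pareto_front(schedules))  # (130.0, 12.0) is dominated by (120.0, 9.0)
```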

Relevance: 20.00%

Abstract:

The concept of big data has already outperformed traditional data management efforts in almost all industries. In other instances it has succeeded in obtaining promising results that derive value from the large-scale integration and analysis of heterogeneous data sources, for example genomic and proteomic information. Big data analytics, which describes the data sets and analytical techniques in software applications that are especially large and complex, has become increasingly important due to its significant advantages, including better business decisions, cost reduction and the delivery of new products and services [1]. In a similar context, the health community has experienced not only more complex and larger data content, but also information systems containing a large number of data sources with interrelated and interconnected data attributes. This has resulted in challenging and highly dynamic environments that give rise to big data with its innumerable complexities, for instance the sharing of information subject to the security requirements expected by stakeholders. Compared with other sectors, big data analysis in the health sector is still in its early stages. Key challenges include accommodating the volume, velocity and variety of healthcare data amid the current deluge of exponential growth. Given the complexity of big data, it is understood that while data storage and accessibility are technically manageable, applying Information Accountability measures to healthcare big data might be a practical solution in support of information security, privacy and traceability. Transparency is one important measure that can demonstrate integrity, which is a vital factor in the healthcare service. Clarity about performance expectations is another Information Accountability measure, necessary to avoid data ambiguity, controversy about interpretation and, finally, liability [2]. According to current studies [3], Electronic Health Records (EHR) are key information resources for big data analysis and are also composed of varied co-created values. Common healthcare information originates from, and is used by, different actors and groups, which facilitates understanding of its relationship to other data sources. Consequently, healthcare services often serve as an integrated service bundle. Although a critical requirement in healthcare services and analytics, a comprehensive set of guidelines for adopting EHRs to fulfil big data analysis requirements is difficult to find. As a remedy, this research work therefore focuses on a systematic approach containing comprehensive guidelines, together with the accurate data that must be provided, to apply and evaluate big data analysis until the necessary decision-making requirements are fulfilled and the quality of healthcare services is improved. Hence, we believe that this approach would subsequently improve quality of life.

Relevance: 20.00%

Abstract:

With the ever increasing amount of eHealth data available from various eHealth systems and sources, Health Big Data Analytics promises enticing benefits such as enabling the discovery of new treatment options and improved decision making. However, concerns over the privacy of information have hindered the aggregation of this information. To address these concerns, we propose the use of Information Accountability protocols to provide patients with the ability to decide how and when their data can be shared and aggregated for use in big data research. In this paper, we discuss the issues surrounding Health Big Data Analytics and propose a consent-based model to address privacy concerns to aid in achieving the promised benefits of Big Data in eHealth.
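As a minimal illustration of what such a consent-based gate might look like in code, the sketch below records, per patient, which data categories may be used for which research purposes and until when, and checks a request against that record. The field names and categories are hypothetical, not part of the proposed model.

```python
# Hypothetical consent record and check for sharing eHealth data in big-data research.
# Field names and categories are illustrative only; they are not taken from the paper.
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, Set

@dataclass
class ConsentRecord:
    patient_id: str
    # data category -> research purposes the patient has consented to
    permitted: Dict[str, Set[str]] = field(default_factory=dict)
    expires: date = date.max

    def allows(self, category: str, purpose: str, on: date) -> bool:
        """True only if the category/purpose pair is consented to and consent has not expired."""
        return on <= self.expires and purpose in self.permitted.get(category, set())

consent = ConsentRecord("patient-42",
                        permitted={"medications": {"epidemiology"},
                                   "lab_results": {"epidemiology", "clinical_trials"}},
                        expires=date(2026, 12, 31))
print(consent.allows("medications", "epidemiology", on=date(2025, 6, 1)))     # True
print(consent.allows("medications", "clinical_trials", on=date(2025, 6, 1)))  # False
```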

Relevance: 20.00%

Abstract:

Concerns over the security and privacy of patient information are one of the biggest hindrances to sharing health information and to the wide adoption of eHealth systems. At present, the requirements of healthcare consumers (i.e. patients) compete with those of healthcare professionals (HCPs): while consumers want control over their information, healthcare professionals want access to as much information as required in order to make well-informed decisions and provide quality care. To balance these requirements, the use of an Information Accountability Framework devised for eHealth systems has been proposed. In this paper, we take a step closer to the adoption of the Information Accountability protocols and demonstrate their functionality through an implementation in FluxMED, a customisable EHR system.

Relevance: 20.00%

Abstract:

Though difficult, the study of gene-environment interactions in multifactorial diseases is crucial for interpreting the relevance of non-heritable factors and prevents overlooking genetic associations with small but measurable effects. We propose a "candidate interactome" (i.e. a group of genes whose products are known to physically interact with environmental factors that may be relevant for disease pathogenesis) analysis of genome-wide association data in multiple sclerosis. We looked for statistical enrichment of associations among interactomes that, at the current state of knowledge, may be representative of gene-environment interactions of potential, uncertain or unlikely relevance for multiple sclerosis pathogenesis: Epstein-Barr virus, human immunodeficiency virus, hepatitis B virus, hepatitis C virus, cytomegalovirus, HHV8-Kaposi sarcoma, H1N1-influenza, JC virus, the human innate immunity interactome for type I interferon, autoimmune regulator, vitamin D receptor, aryl hydrocarbon receptor and a panel of proteins targeted by 70 innate immune-modulating viral open reading frames from 30 viral species. Interactomes were either obtained from the literature or manually curated. The P values of all single nucleotide polymorphisms mapping to a given interactome were obtained from the latest genome-wide association study of the International Multiple Sclerosis Genetics Consortium & the Wellcome Trust Case Control Consortium 2. The interaction between genotype and Epstein-Barr virus emerges as relevant for multiple sclerosis etiology. However, in line with recent data on the coexistence of common and unique strategies used by viruses to perturb the human molecular system, other viruses also have a similar potential, though probably less relevant in epidemiological terms. © 2013 Mechelli et al.
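For readers unfamiliar with interactome-level enrichment of GWAS signals, the sketch below shows one generic way such a test can be framed: summarise the association P values of SNPs mapping to a candidate interactome and compare that summary against randomly drawn SNP sets of the same size. This is an illustration of the general idea with simulated P values, not the specific enrichment procedure used in the study.

```python
# Generic permutation test for enrichment of GWAS association signal in a SNP set
# (e.g. SNPs mapping to a candidate interactome). P values are simulated; this is
# not the enrichment method used in the study.
import math
import random

def enrichment_score(p_values):
    """Mean -log10(P) over the SNP set; higher means stronger aggregate association."""
    return sum(-math.log10(p) for p in p_values) / len(p_values)

def permutation_p(interactome_pvals, genomewide_pvals, n_perm=10000, seed=0):
    rng = random.Random(seed)
    observed = enrichment_score(interactome_pvals)
    k = len(interactome_pvals)
    hits = sum(enrichment_score(rng.sample(genomewide_pvals, k)) >= observed
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)  # empirical P value with a pseudocount

rng = random.Random(1)
genomewide = [rng.random() for _ in range(100000)]   # null SNP P values
candidate = [rng.random() ** 2 for _ in range(200)]  # skewed toward small P values
print(permutation_p(candidate, genomewide))          # small value -> enriched set
```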