850 results for Multi-Higgs Models


Relevance:

30.00%

Publisher:

Abstract:

Satellite altimetry has revolutionized our understanding of ocean dynamics thanks to frequent sampling and global coverage. Nevertheless, coastal data have been flagged as unreliable due to land and calm water interference in the altimeter and radiometer footprint and uncertainty in the modelling of high-frequency tidal and atmospheric forcing. Our study addresses the first issue, i.e. altimeter footprint contamination, via retracking, presenting ALES, the Adaptive Leading Edge Subwaveform retracker. ALES is potentially applicable to all the pulse-limited altimetry missions and its aim is to retrack both open ocean and coastal data with the same accuracy using just one algorithm. ALES selects part of each returned echo and models it with a classic "open ocean" Brown functional form, by means of least-squares estimation whose convergence is found through the Nelder-Mead nonlinear optimization technique. By avoiding echoes from bright targets along the trailing edge, it is capable of retrieving more coastal waveforms than the standard processing. By adapting the width of the estimation window according to the significant wave height, it aims at maintaining the accuracy of the standard processing in both the open ocean and the coastal strip. This innovative retracker is validated against tide gauges in the Adriatic Sea and in the Greater Agulhas System for three different missions: Envisat, Jason-1 and Jason-2. Considerations of noise and biases provide a further verification of the strategy. The results show that ALES is able to provide more reliable 20-Hz data for all three missions in areas where even 1-Hz averages are flagged as unreliable in standard products. Application of the ALES retracker led to roughly half of the analysed tracks showing a marked improvement in correlation with the tide gauge records, with the rms difference being reduced by a factor of 1.5 for Jason-1 and Jason-2 and over 4 for Envisat in the Adriatic Sea (at the closest point to the tide gauge).
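The fitting step described above lends itself to a compact illustration. The sketch below fits a simplified Brown-style functional form to a subwaveform window by least squares, minimized with Nelder-Mead, while a bright target in the trailing edge is excluded by the window choice. The parameterization, window bounds and synthetic waveform are illustrative assumptions, not the published ALES algorithm.

```python
# Minimal sketch of subwaveform retracking in the spirit of ALES:
# fit a simplified Brown model to the leading edge of an altimeter
# waveform by least squares, minimized with Nelder-Mead.
import numpy as np
from scipy.optimize import minimize
from scipy.special import erf

def brown_model(t, amp, epoch, sigma_c, decay):
    """Simplified Brown ocean-return model (illustrative form)."""
    edge = 0.5 * (1.0 + erf((t - epoch) / (np.sqrt(2.0) * sigma_c)))
    tail = np.exp(-decay * np.clip(t - epoch, 0.0, None))
    return amp * edge * tail

def retrack(t, waveform, window):
    """Fit the model on a subwaveform window around the leading edge."""
    sub = slice(*window)           # estimation window (gates)
    def cost(p):
        amp, epoch, sigma_c, decay = p
        resid = waveform[sub] - brown_model(t[sub], amp, epoch, sigma_c, decay)
        return np.sum(resid ** 2)  # least-squares misfit
    p0 = [waveform.max(), t[np.argmax(np.gradient(waveform))], 1.0, 0.01]
    res = minimize(cost, p0, method="Nelder-Mead")
    return res.x                   # amp, epoch (range), sigma_c (~SWH), decay

# Synthetic example: 128-gate waveform with a bright target in the tail.
t = np.arange(128, dtype=float)
wf = brown_model(t, 100.0, 40.0, 2.0, 0.02)
wf[90:95] += 80.0                  # coastal bright-target contamination
wf += np.random.default_rng(0).normal(0, 2, t.size)
params = retrack(t, wf, window=(20, 70))  # window excludes the bright target
```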

Relevance:

30.00%

Publisher:

Abstract:

Aim: Ecological niche modelling can provide valuable insight into species' environmental preferences and aid the identification of key habitats for populations of conservation concern. Here, we integrate biologging, satellite remote-sensing and ensemble ecological niche models (EENMs) to identify predictable foraging habitats for a globally important population of the grey-headed albatross (GHA) Thalassarche chrysostoma. Location: Bird Island, South Georgia; Southern Atlantic Ocean. Methods: GPS and geolocation-immersion loggers were used to track at-sea movements and activity patterns of GHA over two breeding seasons (n = 55; brood-guard). Immersion frequency (landings per 10-min interval) was used to define foraging events. EENM combining Generalized Additive Models (GAM), MaxEnt, Random Forest (RF) and Boosted Regression Trees (BRT) identified the biophysical conditions characterizing the locations of foraging events, using time-matched oceanographic predictors (Sea Surface Temperature, SST; chlorophyll a, chl-a; thermal front frequency, TFreq; depth). Model performance was assessed through iterative cross-validation and extrapolative performance through cross-validation among years. Results: Predictable foraging habitats identified by EENM spanned neritic (<500 m), shelf break and oceanic waters, coinciding with a set of persistent biophysical conditions characterized by particular thermal ranges (3–8 °C, 12–13 °C), elevated primary productivity (chl-a > 0.5 mg m⁻³) and frequent manifestation of mesoscale thermal fronts. Our results confirm previous indications that GHA exploit enhanced foraging opportunities associated with frontal systems and objectively identify the Antarctic Polar Frontal Zone (APFZ) as a region of high foraging habitat suitability. Moreover, at the spatial and temporal scales investigated here, the performance of multi-model ensembles was superior to that of single-algorithm models, and cross-validation among years indicated reasonable extrapolative performance. Main conclusions: EENM techniques are useful for integrating the predictions of several single-algorithm models, reducing potential bias and increasing confidence in predictions. Our analysis highlights the value of EENM for use with movement data in identifying at-sea habitats of wide-ranging marine predators, with clear implications for conservation and management.
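The ensemble step can be sketched compactly. Below, several algorithms are fit to presence/background data with oceanographic predictors and their suitability predictions are combined, weighted by cross-validated AUC. The synthetic data are illustrative, and logistic regression stands in for GAM and MaxEnt, which are not part of scikit-learn; this is a sketch of the ensemble idea, not the paper's EENM pipeline.

```python
# Minimal sketch of an ensemble niche model: fit several algorithms,
# weight each by cross-validated AUC, and average their predictions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# columns stand in for SST, chl-a, thermal front frequency, depth
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "BRT": GradientBoostingClassifier(random_state=0),
    "GLM": LogisticRegression(max_iter=1000),  # stand-in for GAM/MaxEnt
}

weights, preds = {}, {}
for name, m in models.items():
    auc = cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
    weights[name] = auc                       # AUC-based ensemble weight
    preds[name] = m.fit(X, y).predict_proba(X)[:, 1]

total = sum(weights.values())
ensemble = sum(w / total * preds[n] for n, w in weights.items())
```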

Relevance:

30.00%

Publisher:

Abstract:

Marine legislation is becoming more complex and marine ecosystem-based management is specified in national and regional legislative frameworks. Shelf-seas community and ecosystem models (hereafter termed ecosystem models) are central to the delivery of ecosystem-based management, but there is limited uptake and use of model products by decision makers in Europe and the UK in comparison with other countries. In this study, the challenges to the uptake and use of ecosystem models in support of marine environmental management are assessed using the UK capability as an example. The UK has a broad capability in marine ecosystem modelling, with at least 14 different models that support management, but few examples exist of ecosystem modelling that underpins policy or management decisions. To improve understanding of policy and management issues that can be addressed using ecosystem models, a workshop was convened that brought together advisors, assessors, biologists, social scientists, economists, modellers, statisticians, policy makers, and funders. Some policy requirements were identified that can be addressed without further model development, including: attribution of environmental change to underlying drivers, integration of models and observations to develop more efficient monitoring programmes, assessment of indicator performance for different management goals, and the costs and benefits of legislation. Multi-model ensembles are being developed in cases where many models exist, but model structures are very diverse, making a standardised approach to combining outputs a significant challenge, and there is a need for new methodologies for describing, analysing, and visualising uncertainties. A stronger link to social and economic systems is needed to increase the range of policy-related questions that can be addressed. It is also important to improve communication between policy and modelling communities so that there is a shared understanding of the strengths and limitations of ecosystem models.

Relevance:

30.00%

Publisher:

Abstract:

Adaptability to changing circumstances is a key feature of living creatures. Understanding such adaptive processes is central to developing successful autonomous artifacts. In this paper two perspectives are brought to bear on the issue of adaptability. The first is a short-term perspective which looks at adaptability in terms of the interactions between the agent and the environment. The second perspective involves a hierarchical evolutionary model which seeks to identify higher-order forms of adaptability based on the concept of adaptive meta-constructs. Task-oriented and agent-centered models of adaptive processes in artifacts are considered from these two perspectives. The former is represented by the fitness function approach found in evolutionary learning, and the latter by the concepts of empowerment and homeokinesis found in models derived from the self-organizing systems approach. A meta-construct approach to adaptability based on the identification of higher-level meta-metrics is also outlined. © 2009 Published by Elsevier B.V.
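The task-oriented fitness-function approach mentioned above can be made concrete with a minimal (1+1) evolutionary loop; the toy objective below is purely illustrative, and an agent-centered measure such as empowerment would replace the external `fitness` with an intrinsic quantity computed from the agent-environment interaction.

```python
# Minimal (1+1) evolutionary loop driven by an external fitness function.
import numpy as np

def fitness(x):
    # externally defined task objective (illustrative toy target)
    return -np.sum((x - 0.5) ** 2)

rng = np.random.default_rng(0)
parent = rng.normal(size=5)
for _ in range(1000):
    child = parent + rng.normal(scale=0.1, size=5)  # mutation
    if fitness(child) >= fitness(parent):           # selection
        parent = child
```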

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE The appropriate selection of patients for early clinical trials presents a major challenge. Previous analyses focusing on this problem were limited by small size and by interpractice heterogeneity. This study aims to define prognostic factors to guide risk-benefit assessments by using a large patient database from multiple phase I trials. PATIENTS AND METHODS Data were collected from 2,182 eligible patients treated in phase I trials between 2005 and 2007 in 14 European institutions. We derived and validated independent prognostic factors for 90-day mortality by using multivariate logistic regression analysis. RESULTS The 90-day mortality was 16.5% with a drug-related death rate of 0.4%. Trial discontinuation within 3 weeks occurred in 14% of patients primarily because of disease progression. Eight different prognostic variables for 90-day mortality were validated: performance status (PS), albumin, lactate dehydrogenase, alkaline phosphatase, number of metastatic sites, clinical tumor growth rate, lymphocytes, and WBC. Two different models of prognostic scores for 90-day mortality were generated by using these factors, including or excluding PS; both achieved specificities of more than 85% and sensitivities of approximately 50% when using a score cutoff of 5 or higher. These models were not superior to the previously published Royal Marsden Hospital score in their ability to predict 90-day mortality. CONCLUSION Patient selection using any of these prognostic scores will reduce non-drug-related 90-day mortality among patients enrolled in phase I trials by 50%. However, this can be achieved only by an overall reduction in recruitment to phase I studies of 20%, more than half of whom would in fact have survived beyond 90 days.
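To make the score-cutoff evaluation concrete, the sketch below computes an additive risk score and derives sensitivity and specificity at a cutoff of 5 or higher; the factor thresholds and point values are illustrative stand-ins, not the published scoring rules.

```python
# Minimal sketch: additive prognostic score with a >= 5 cutoff, evaluated
# for sensitivity and specificity against observed 90-day outcomes.
import numpy as np

def risk_score(patient):
    score = 0
    score += 2 if patient["albumin"] < 35 else 0   # g/L, illustrative
    score += 2 if patient["ldh"] > 250 else 0      # U/L, illustrative
    score += 1 if patient["n_met_sites"] > 2 else 0
    score += 2 if patient["ps"] >= 2 else 0        # performance status
    return score

patients = [
    {"albumin": 30, "ldh": 400, "n_met_sites": 4, "ps": 2, "died_90d": True},
    {"albumin": 42, "ldh": 180, "n_met_sites": 1, "ps": 0, "died_90d": False},
]
pred = np.array([risk_score(p) >= 5 for p in patients])   # flagged high-risk
truth = np.array([p["died_90d"] for p in patients])
sensitivity = (pred & truth).sum() / truth.sum()
specificity = (~pred & ~truth).sum() / (~truth).sum()
```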

Relevance:

30.00%

Publisher:

Abstract:

Loss of biodiversity and nutrient enrichment are two of the main human impacts on ecosystems globally, yet we understand very little about the interactive effects of multiple stressors on natural communities and how this relates to biodiversity and ecosystem functioning. Advancing our understanding requires the following: (1) incorporation of processes occurring within and among trophic levels in natural ecosystems and (2) tests of context-dependency of species loss effects. We examined the effects of loss of a key predator and two groups of its prey on algal assemblages at both ambient and enriched nutrient conditions in a marine benthic system and tested for interactions between the loss of functional diversity and nutrient enrichment on ecosystem functioning. We found that enrichment interacted with food web structure to alter the effects of species loss in natural communities. At ambient conditions, the loss of primary consumers led to an increase in biomass of algae, whereas predator loss caused a reduction in algal biomass (i.e. a trophic cascade). However, contrary to expectations, we found that nutrient enrichment negated the cascading effect of predators on algae. Moreover, algal assemblage structure varied in distinct ways in response to mussel loss, grazer loss, predator loss and with nutrient enrichment, with compensatory shifts in algal abundance driven by variation in responses of different algal species to different environmental conditions and the presence of different consumers. We identified and characterized several context-dependent mechanisms driving direct and indirect effects of consumers. Our findings highlight the need to consider environmental context when examining potential species redundancies in particular with regard to changing environmental conditions. Furthermore, non-trophic interactions based on empirical evidence must be incorporated into food web-based ecological models to improve understanding of community responses to global change.

Relevance:

30.00%

Publisher:

Abstract:

Increasingly, infrastructure providers are supplying the cloud marketplace with storage and on-demand compute resources to host cloud applications. From an application user's point of view, it is desirable to identify the most appropriate set of available resources on which to execute an application. Resource choice can be complex and may involve comparing available hardware specifications, operating systems, value-added services, such as network configuration or data replication, and operating costs, such as hosting cost and data throughput. Providers' cost models often change and new commodity cost models, such as spot pricing, have been introduced to offer significant savings. In this paper, a software abstraction layer is used to discover infrastructure resources for a particular application, across multiple providers, by using a two-phase constraints-based approach. In the first phase, a set of possible infrastructure resources are identified for a given application. In the second phase, a heuristic is used to select the most appropriate resources from the initial set. For some applications a cost-based heuristic is most appropriate; for others a performance-based heuristic may be used. A financial services application and a high performance computing application are used to illustrate the execution of the proposed resource discovery mechanism. The experimental results show that the proposed model can dynamically select an appropriate set of resources that match the application's requirements.
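The two-phase mechanism can be sketched as a hard-constraint filter followed by a pluggable selection heuristic; the resource catalogue and its fields below are illustrative assumptions, not the paper's abstraction layer.

```python
# Minimal sketch of two-phase, constraints-based resource discovery:
# phase one filters by hard requirements, phase two ranks survivors
# with a cost-based or performance-based heuristic.
from dataclasses import dataclass

@dataclass
class Resource:
    provider: str
    cores: int
    mem_gb: int
    os: str
    hourly_cost: float
    benchmark: float   # higher is faster

catalogue = [
    Resource("A", 8, 32, "linux", 0.40, 1.0),
    Resource("B", 16, 64, "linux", 0.90, 1.8),
    Resource("C", 4, 16, "windows", 0.25, 0.6),
]

def phase_one(resources, req):
    """Keep only resources satisfying every hard constraint."""
    return [r for r in resources
            if r.cores >= req["cores"]
            and r.mem_gb >= req["mem_gb"]
            and r.os == req["os"]]

def phase_two(candidates, heuristic):
    """Pick the best candidate under the chosen heuristic."""
    return min(candidates, key=heuristic) if candidates else None

req = {"cores": 8, "mem_gb": 32, "os": "linux"}
cheapest = phase_two(phase_one(catalogue, req), lambda r: r.hourly_cost)
fastest = phase_two(phase_one(catalogue, req), lambda r: -r.benchmark)
```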

Relevance:

30.00%

Publisher:

Abstract:

A benefit function transfer obtains estimates of willingness-to-pay (WTP) for the evaluation of a given policy at a site by combining existing information from different study sites. This has the advantage that more efficient estimates are obtained, but it relies on the assumption that the heterogeneity between sites is appropriately captured in the benefit transfer model. A more expensive alternative to estimate WTP is to analyze only data from the policy site in question while ignoring information from other sites. We make use of the fact that these two choices can be viewed as a model selection problem and extend the set of models to allow for the hypothesis that the benefit function is only applicable to a subset of sites. We show how Bayesian model averaging (BMA) techniques can be used to optimally combine information from all models. The Bayesian algorithm searches for the set of sites that can form the basis for estimating a benefit function and reveals whether such information can be transferred to new sites for which only a small data set is available. We illustrate the method with a sample of 42 forests from the U.K. and Ireland. We find that BMA benefit function transfer produces reliable estimates and can increase the information content of a small sample by a factor of about 8 when the forest is 'poolable'. © 2008 Elsevier Inc. All rights reserved.
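The averaging step can be illustrated with the common BIC approximation to posterior model probabilities, which weight each candidate model's estimate; the candidate models and numbers below are invented for illustration, not the paper's estimates.

```python
# Minimal sketch of Bayesian model averaging: BIC differences
# approximate posterior model probabilities, which weight the
# per-model WTP estimates into one combined estimate.
import numpy as np

# (wtp_estimate, bic) for e.g. policy-site-only vs. pooled models
models = {"site_only": (12.0, 410.0),
          "pool_subset": (10.5, 395.0),
          "pool_all": (9.0, 402.0)}

bics = np.array([b for _, b in models.values()])
w = np.exp(-0.5 * (bics - bics.min()))   # relative model weights
w /= w.sum()                             # posterior model probabilities
wtp_bma = sum(wi * est for wi, (est, _) in zip(w, models.values()))
```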

Relevance:

30.00%

Publisher:

Abstract:

A key pathological feature of late-onset Alzheimer's disease (LOAD) is the abnormal extracellular accumulation of the amyloid-β (Aβ) peptide. Thus, altered Aβ degradation could be a major contributor to the development of LOAD. Variants in the gene encoding the Aβ-degrading enzyme, angiotensin-1 converting enzyme (ACE), therefore represent plausible candidates for association with LOAD pathology and risk. Following AlzGene meta-analyses of all published case-control studies, the ACE variants rs4291 and rs1800764 showed significant association with LOAD risk. Furthermore, ACE haplotypes are associated with both plasma ACE levels and LOAD risk. We tested three ACE variants (rs4291, rs4343, and rs1800764) for association with LOAD in ten Caucasian case-control populations (n = 8,212). No association was found using multiple logistic models (all p > 0.09). We found no population heterogeneity (all p > 0.38) or evidence for association with LOAD risk following meta-analysis of the ten populations for rs4343 (OR = 1.00), rs4291 (OR = 0.97), or rs1800764 (OR = 0.99). Although we found no haplotypic association in our complete dataset (p = 0.51), a significant global haplotypic p-value was observed in one population (p = 0.007) due to an association of the H3 haplotype (OR = 0.72, p = 0.02) and a trend towards an association of H4 (OR = 1.38, p = 0.09) and H7 (OR = 2.07, p = 0.08), although these did not survive Bonferroni correction. Previously reported associations of ACE variants with LOAD are diminished in light of this study. At best, ACE variants have modest effect sizes, which are likely part of a complex interaction between genetic, phenotypic and pharmacological effects that would be undetected in traditional case-control studies.
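The pooling underlying such a meta-analysis is typically a fixed-effect, inverse-variance average of log odds ratios, as sketched below with invented per-population values for illustration.

```python
# Minimal sketch of a fixed-effect, inverse-variance meta-analysis
# pooling per-population odds ratios for a single variant.
import numpy as np

# (odds ratio, standard error of log OR) per population (illustrative)
studies = [(0.95, 0.08), (1.02, 0.10), (0.98, 0.07), (1.01, 0.12)]

log_or = np.log([o for o, _ in studies])
se = np.array([s for _, s in studies])
w = 1.0 / se ** 2                              # inverse-variance weights
pooled_log_or = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
pooled_or = np.exp(pooled_log_or)              # close to 1: consistent with null
ci95 = np.exp(pooled_log_or + np.array([-1.96, 1.96]) * pooled_se)
```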

Relevance:

30.00%

Publisher:

Abstract:

In the double-detonation scenario for Type Ia supernovae, it is suggested that a detonation initiates in a shell of helium-rich material accreted from a companion star by a sub-Chandrasekhar-mass white dwarf. This shell detonation drives a shock front into the carbon-oxygen white dwarf that triggers a secondary detonation in the core. The core detonation results in a complete disruption of the white dwarf. Earlier studies concluded that this scenario has difficulties in accounting for the observed properties of Type Ia supernovae since the explosion ejecta are surrounded by the products of explosive helium burning in the shell. Recently, however, it was proposed that detonations might be possible for much less massive helium shells than previously assumed (Bildsten et al.). Moreover, it was shown that even detonations of these minimum helium shell masses robustly trigger detonations of the carbon-oxygen core (Fink et al.). Therefore, it is possible that the impact of the helium layer on observables is less than previously thought. Here, we present time-dependent multi-wavelength radiative transfer calculations for models with minimum helium shell mass and derive synthetic observables for both the optical and γ-ray spectral regions. These differ strongly from those found in earlier simulations of sub-Chandrasekhar-mass explosions in which more massive helium shells were considered. Our models predict light curves that cover both the range of brightnesses and the rise and decline times of observed Type Ia supernovae. However, their colors and spectra do not match the observations. In particular, their B − V colors are generally too red. We show that this discrepancy is mainly due to the composition of the burning products of the helium shell of the Fink et al. models, which contain significant amounts of titanium and chromium. Using a toy model, we also show that the burning products of the helium shell depend crucially on its initial composition. This leads us to conclude that good agreement between sub-Chandrasekhar-mass explosions and observed Type Ia supernovae may still be feasible but further study of the shell properties is required.

Relevance:

30.00%

Publisher:

Abstract:

Data flow techniques have been around since the early '70s, when they were used in compilers for sequential languages. Shortly after their introduction they were also considered as a possible model for parallel computing, although the impact here was limited. Recently, however, data flow has been identified as a candidate for efficient implementation of various programming models on multi-core architectures. In most cases, however, the burden of determining data flow "macro" instructions is left to the programmer, while the compiler/run time system manages only the efficient scheduling of these instructions. We discuss a structured parallel programming approach supporting automatic compilation of programs to macro data flow and we show experimental results demonstrating the feasibility of the approach and the efficiency of the resulting "object" code on different classes of state-of-the-art multi-core architectures. The experimental results use different base mechanisms to implement the macro data flow run time support, from plain pthreads with condition variables to more modern and effective lock- and fence-free parallel frameworks. Experimental results comparing efficiency of the proposed approach with those achieved using other, more classical, parallel frameworks are also presented. © 2012 IEEE.
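The run-time idea can be sketched as a tiny interpreter: a macro instruction becomes fireable once all of its input tokens are present, and fireable instructions execute concurrently on a thread pool. The task graph below is an illustrative stand-in, not the paper's run-time support.

```python
# Minimal sketch of a macro data-flow executor: coarse-grained tasks
# fire when all input tokens have arrived; fireable tasks run in parallel.
from concurrent.futures import ThreadPoolExecutor

# task -> (function, names of input tasks); leaf tasks have no inputs
graph = {
    "a": (lambda: 2, []),
    "b": (lambda: 3, []),
    "mul": (lambda a, b: a * b, ["a", "b"]),
    "inc": (lambda m: m + 1, ["mul"]),
}

def run(graph):
    tokens, pending = {}, dict(graph)
    with ThreadPoolExecutor() as pool:
        while pending:
            # a task is fireable once every input token is available
            fireable = [t for t, (_, ins) in pending.items()
                        if all(i in tokens for i in ins)]
            futures = {t: pool.submit(pending[t][0],
                                      *(tokens[i] for i in pending[t][1]))
                       for t in fireable}
            for t, f in futures.items():
                tokens[t] = f.result()
                del pending[t]
    return tokens

print(run(graph)["inc"])   # (2 * 3) + 1 == 7
```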

Relevance:

30.00%

Publisher:

Abstract:

On multiprocessors with explicitly managed memory hierarchies (EMM), software has the responsibility of moving data in and out of fast local memories. This task can be complex and error-prone even for expert programmers. Before we can allow compilers to handle the complexity for us, we must identify the abstractions that are general enough to allow us to write applications with reasonable effort, yet specific enough to exploit the vast on-chip memory bandwidth of EMM multi-processors. To this end, we compare two programming models against hand-tuned codes on the STI Cell, paying attention to programmability and performance. The first programming model, Sequoia, abstracts the memory hierarchy as private address spaces, each corresponding to a parallel task. The second, Cellgen, is a new framework which provides OpenMP-like semantics and the abstraction of a shared address space divided into private and shared data. We compare three applications programmed using these models against their hand-optimized counterparts in terms of abstractions, programming complexity, and performance.
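The data movement that such programming models must orchestrate can be sketched as streaming a large array through a small local buffer, with the compute kernel touching only local data; the buffer size and kernel below are illustrative stand-ins for DMA transfers into, e.g., Cell SPE local store.

```python
# Minimal sketch of explicit data movement on an EMM machine:
# stream a large array through a small "local store" in chunks.
import numpy as np

LOCAL_STORE_WORDS = 1024          # pretend on-chip capacity (illustrative)

def kernel(local):
    local *= 2.0                  # compute only on the local buffer

def process(big_array):
    for start in range(0, big_array.size, LOCAL_STORE_WORDS):
        chunk = slice(start, start + LOCAL_STORE_WORDS)
        local = big_array[chunk].copy()   # "DMA in" to local store
        kernel(local)
        big_array[chunk] = local          # "DMA out" back to main memory

data = np.arange(4096, dtype=float)
process(data)
```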

Relevance:

30.00%

Publisher:

Abstract:

Integrating evidence from multiple domains is useful in prioritizing disease candidate genes for subsequent testing. We ranked all known human genes (n = 3819) under linkage peaks in the Irish Study of High-Density Schizophrenia Families using three different evidence domains: 1) a meta-analysis of microarray gene expression results using the Stanley Brain collection, 2) a schizophrenia protein-protein interaction network, and 3) a systematic literature search. Each gene was assigned a domain-specific p-value and ranked after evaluating the evidence within each domain. For comparison to this ranking process, a large-scale candidate gene hypothesis was also tested by including genes with Gene Ontology terms related to neurodevelopment. Subsequently, genotypes of 3725 SNPs in 167 genes from a custom Illumina iSelect array were used to evaluate the top-ranked vs. hypothesis-selected genes. Seventy-three genes were both highly ranked and involved in neurodevelopment (category 1), while 42 and 52 genes were exclusive to neurodevelopment (category 2) or highly ranked (category 3), respectively. The most significant associations were observed in genes PRKG1, PRKCE, and CNTN4, but no individual SNPs were significant after correction for multiple testing. Comparison of the approaches showed an excess of significant tests using the hypothesis-driven neurodevelopment category. Random selection of similar sized genes from two independent genome-wide association studies (GWAS) of schizophrenia showed the excess was unlikely by chance. In a further meta-analysis of three GWAS datasets, four candidate SNPs reached nominal significance. Although gene ranking using integrated sources of prior information did not enrich for significant results in the current experiment, gene selection using an a priori hypothesis (neurodevelopment) was superior to random selection. As such, further development of gene ranking strategies using more carefully selected sources of information is warranted.
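The cross-domain ranking step can be illustrated with Fisher's method for combining each gene's domain-specific p-values into one score; the gene names are taken from the abstract, but the p-values are invented for illustration, and the study's actual within-domain evaluation may differ.

```python
# Minimal sketch: combine domain-specific p-values per gene with
# Fisher's method, then rank genes by combined evidence.
import numpy as np
from scipy.stats import chi2

# per-gene p-values for: expression meta-analysis, PPI network, literature
genes = {
    "PRKG1": [0.01, 0.20, 0.05],
    "PRKCE": [0.03, 0.04, 0.30],
    "CNTN4": [0.50, 0.02, 0.10],
}

def fisher_combine(pvals):
    stat = -2.0 * np.sum(np.log(pvals))    # ~ chi-squared with 2k d.o.f.
    return chi2.sf(stat, df=2 * len(pvals))

ranked = sorted(genes, key=lambda g: fisher_combine(genes[g]))
print(ranked)   # genes ordered by combined evidence, strongest first
```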