91 results for State-based Specifications
Abstract:
While state-of-the-art models of Earth's climate system have improved tremendously over the last 20 years, nontrivial structural flaws still hinder their ability to forecast the decadal dynamics of the Earth system realistically. Contrasting the skill of these models not only with each other but also with empirical models can reveal the space and time scales on which simulation models exploit their physical basis effectively and quantify their ability to add information to operational forecasts. The skill of decadal probabilistic hindcasts for annual global-mean and regional-mean temperatures from the EU Ensemble-Based Predictions of Climate Changes and Their Impacts (ENSEMBLES) project is contrasted with several empirical models. Both the ENSEMBLES models and a “dynamic climatology” empirical model show probabilistic skill above that of a static climatology for global-mean temperature. The dynamic climatology model, however, often outperforms the ENSEMBLES models. The fact that empirical models display skill similar to that of today's state-of-the-art simulation models suggests that empirical forecasts can improve decadal forecasts for climate services, just as in weather, medium-range, and seasonal forecasting. It is suggested that the direct comparison of simulation models with empirical models become a regular component of large model forecast evaluations. Doing so would clarify the extent to which state-of-the-art simulation models provide information beyond that available from simpler empirical models and clarify current limitations in using simulation forecasting for decision support. Ultimately, the skill of simulation models based on physical principles is expected to surpass that of empirical models in a changing climate; their direct comparison provides information on progress toward that goal, which is not available in model–model intercomparisons.
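The "dynamic climatology" benchmark described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration (synthetic temperatures, a simple Gaussian ignorance score, a 5-year window); it is not the ENSEMBLES evaluation protocol, but it shows why an empirical forecast built from recent observations beats a static climatology when there is a trend.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual global-mean temperature anomalies with a warming trend.
years = np.arange(1960, 2011)
temps = 0.015 * (years - 1960) + rng.normal(0.0, 0.1, years.size)

train, verif = temps[:-10], temps[-10:]  # hindcast the last decade

def gaussian_ign(mu, sigma, obs):
    """Mean ignorance score (negative log density, bits); lower is better."""
    dens = np.exp(-0.5 * ((obs - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return float(np.mean(-np.log2(dens)))

# Static climatology: mean and spread of the whole training record.
ign_static = gaussian_ign(train.mean(), train.std(), verif)

# Dynamic climatology: mean of the most recent k years, same spread.
k = 5
ign_dynamic = gaussian_ign(train[-k:].mean(), train.std(), verif)

print(ign_static, ign_dynamic)  # the dynamic benchmark should score better
```

Under a trend, the recent-window mean tracks the verification period far more closely than the long-term mean, which is the sense in which a dynamic climatology sets a harder bar for simulation models.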
Abstract:
The study of the mechanical energy budget of the oceans using Lorenz available potential energy (APE) theory is based on knowledge of the adiabatically re-arranged Lorenz reference state of minimum potential energy. The compressible and nonlinear character of the equation of state for seawater has been thought to cause the reference state to be ill-defined, casting doubt on the usefulness of APE theory for investigating ocean energetics under realistic conditions. Using a method based on the volume frequency distribution of parcels as a function of temperature and salinity in the context of the seawater Boussinesq approximation, which we illustrate using climatological data, we show that compressibility effects are in fact minor. The reference state can be regarded as a well-defined one-dimensional function of depth, which forms a surface in temperature, salinity and density space between the surface and the bottom of the ocean. For a very small proportion of water masses, this surface can be multivalued and water parcels can have up to two statically stable levels in the reference density profile, of which the shallowest is energetically more accessible. Classifying parcels from the surface to the bottom gives a different reference density profile than classifying in the opposite direction. However, this difference is negligible. We show that the reference state obtained by standard sorting methods is equivalent, though computationally more expensive, to the volume frequency distribution approach. The approach we present can be applied systematically and in a computationally efficient manner to investigate the APE budget of the ocean circulation using models or climatological data.
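The "standard sorting" construction of the Lorenz reference state mentioned above can be sketched in a few lines. This is a toy illustration with a single density variable and equal-volume parcels, not the authors' temperature–salinity volume-frequency method; all numbers are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ocean: equal-volume parcels with density anomalies (kg m^-3).
n = 1000
density = rng.normal(27.0, 0.5, n)      # density of each parcel, arbitrary order
levels = np.linspace(0.0, 4000.0, n)    # depths of the reference levels (m)

# Lorenz reference state: rearrange parcels adiabatically so that density
# increases monotonically with depth -- for equal volumes, simply sort.
rho_ref = np.sort(density)

# Reference depth of each parcel = depth of its rank in the sorted profile.
ref_depth = levels[np.argsort(np.argsort(density))]
```

The sorted profile is statically stable by construction; the APE of the actual state is then obtained from the displacement of each parcel between its actual and reference depths. The volume-frequency approach of the abstract reaches the same reference profile without the O(n log n) sort over all parcels.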
Abstract:
A model based on graph isomorphisms is used to formalize software evolution. Step by step, we narrow the search space by an informed selection of the attributes based on the current state of the art in software engineering and generate a seed solution. We then traverse the resulting space using graph isomorphisms and other set operations over the vertex sets. The new solutions will preserve the desired attributes. The goal of defining an isomorphism-based search mechanism is to construct predictors of evolution that can facilitate the automation of the 'software factory' paradigm. The model allows for automation via software tools implementing the concepts.
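The core operation described above, testing whether two program graphs are isomorphic, can be sketched with a brute-force check. This is an illustrative stand-in only; the paper's graph model, attributes, and search procedure are not reproduced here, and brute force is viable only for tiny graphs.

```python
from itertools import permutations

def is_isomorphic(nodes1, edges1, nodes2, edges2):
    """Brute-force undirected graph isomorphism test (tiny graphs only)."""
    if len(nodes1) != len(nodes2) or len(edges1) != len(edges2):
        return False
    e2 = {frozenset(e) for e in edges2}
    for perm in permutations(nodes2):
        m = dict(zip(nodes1, perm))  # candidate vertex mapping
        if {frozenset((m[u], m[v])) for u, v in edges1} == e2:
            return True
    return False

# A 3-cycle is isomorphic to another 3-cycle, but not to a 2-edge path.
tri1 = (["a", "b", "c"], [("a", "b"), ("b", "c"), ("c", "a")])
tri2 = (["x", "y", "z"], [("x", "y"), ("y", "z"), ("z", "x")])
path = (["p", "q", "r"], [("p", "q"), ("q", "r")])
print(is_isomorphic(tri1[0], tri1[1], tri2[0], tri2[1]))  # True
print(is_isomorphic(tri1[0], tri1[1], path[0], path[1]))  # False
```

In practice a search of the kind the abstract describes would use attribute-aware matching and polynomial-time heuristics rather than exhaustive permutation.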
Abstract:
Background: P300 and steady-state visual evoked potential (SSVEP) approaches have been widely used for brain–computer interface (BCI) systems. However, neither approach works for all subjects. Some groups have reported that a hybrid BCI that combines two or more approaches might provide BCI functionality to more users. Hybrid P300/SSVEP BCIs have only recently been developed and validated, and very few avenues to improve performance have been explored. New method: The present study compares an established hybrid P300/SSVEP BCI paradigm to a new paradigm in which shape changing, instead of color changing, is adopted for P300 evocation to reduce the degradation of SSVEP strength. Results: The results show that the new hybrid paradigm presented in this paper yields much better performance than the normal hybrid paradigm. Comparison with existing method: A performance increase of nearly 20% in SSVEP classification is achieved using the new hybrid paradigm in comparison with the normal hybrid paradigm. All the paradigms used in this paper, except the normal hybrid paradigm, obtain 100% accuracy in P300 classification. Conclusions: The new hybrid P300/SSVEP BCI paradigm, in which shape changing is adopted instead of color changing, can achieve SSVEP classification accuracy as high as the traditional SSVEP paradigm and P300 classification accuracy as high as the traditional P300 paradigm. P300 evocation did not interfere with the SSVEP response in the new hybrid paradigm, making it superior to the normal hybrid P300/SSVEP paradigm.
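The SSVEP side of such a system is commonly classified by comparing spectral power at the candidate flicker frequencies. The sketch below is a minimal illustration of that idea on a synthetic epoch; the sampling rate, epoch length, and flicker frequencies are assumptions, and the authors' actual pipeline is not described in the abstract.

```python
import numpy as np

fs = 250.0                        # sampling rate (Hz), assumed
t = np.arange(0, 4.0, 1.0 / fs)   # one 4-second epoch
flicker = [8.0, 10.0, 12.0]       # candidate stimulus frequencies (Hz)

rng = np.random.default_rng(2)
# Synthetic EEG epoch: a 10 Hz SSVEP buried in broadband noise.
eeg = np.sin(2 * np.pi * 10.0 * t) + rng.normal(0.0, 1.0, t.size)

spec = np.abs(np.fft.rfft(eeg))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

# Classify by the flicker frequency with the largest spectral peak.
power = [spec[np.argmin(np.abs(freqs - f))] for f in flicker]
picked = flicker[int(np.argmax(power))]
print(picked)  # 10.0
```

A stronger, less degraded SSVEP (as with the shape-changing paradigm above) translates directly into a larger peak at the attended frequency and hence more reliable classification.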
Abstract:
We report a straightforward methodology for the fabrication of high-temperature thermoelectric (TE) modules using commercially available solder alloys and metal barriers. This methodology employs standard and accessible facilities that are simple to implement in any laboratory. A TE module formed by nine couples of n-type YbxCo4Sb12 and p-type CexFe3CoSb12 state-of-the-art skutterudite materials was fabricated. The physical properties of the synthesized skutterudites were determined, and the module power output, internal resistance, and thermocycling stability were evaluated in air. At a temperature difference of 365 K, the module provides a volumetric power density of more than 1.5 W cm−3. However, thermocycling showed an increase of the internal module resistance and degradation in performance with the number of cycles when the device is operated at a hot-side temperature higher than 573 K. This may be attributed to oxidation of the skutterudite thermoelements.
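The reported power output can be related to the module's Seebeck voltage and internal resistance through the standard matched-load formula P = (S·ΔT)²/(4R). The per-leg Seebeck coefficient and internal resistance below are assumed round numbers for illustration only; the abstract reports neither.

```python
# Matched-load power of a TE module: P = (S_module * dT)^2 / (4 * R_int).
S_module = 9 * 2 * 200e-6   # V/K: 9 couples x 2 legs x ~200 uV/K per leg (assumed)
dT = 365.0                  # K: temperature difference from the abstract
R_int = 0.05                # ohm: assumed internal resistance
P = (S_module * dT) ** 2 / (4 * R_int)
print(P)  # watts, order-of-magnitude illustration only
```

The same formula explains the degradation seen under thermocycling: as oxidation raises R_int, the deliverable power falls inversely with it even if the Seebeck voltage is unchanged.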
Abstract:
In many lower-income countries, the establishment of marine protected areas (MPAs) involves significant opportunity costs for artisanal fishers, reflected in changes in how they allocate their labor in response to the MPA. The resource economics literature rarely addresses such labor allocation decisions of artisanal fishers and how, in turn, these contribute to the impact of MPAs on fish stocks, yield, and income. This paper develops a spatial bio-economic model of a fishery adjacent to a village of people who allocate their labor between fishing and on-shore wage opportunities to establish a spatial Nash equilibrium at a steady state fish stock in response to various locations for no-take zone MPAs and managed access MPAs. Villagers’ fishing location decisions are based on distance costs, fishing returns, and wages. Here, the MPA location determines its impact on fish stocks, fish yield, and villager income due to distance costs, congestion, and fish dispersal. Incorporating wage labor opportunities into the framework allows examination of the MPA’s impact on rural incomes, with results determining that win-wins between yield and stocks occur in very different MPA locations than do win-wins between income and stocks. Similarly, villagers in a high-wage setting face a lower burden from MPAs than do those in low-wage settings. Motivated by issues of central importance in Tanzania and Costa Rica, we impose various policies on this fishery – location specific no-take zones, increasing on-shore wages, and restricting MPA access to a subset of villagers – to analyze the impact of an MPA on fish stocks and rural incomes in such settings.
Abstract:
BACKGROUND: The cannabinoid type 1 (CB1) receptor neutral antagonist tetrahydrocannabivarin (THCv) has been suggested as a possible treatment for obesity, but without the depressogenic side-effects of inverse agonists such as rimonabant. However, how THCv might affect the resting state functional connectivity of the human brain is as yet unknown. METHOD: We examined the effects of a single 10 mg oral dose of THCv and placebo in 20 healthy volunteers in a randomized, within-subject, double-blind design. Using resting state functional magnetic resonance imaging and seed-based connectivity analyses, we selected the amygdala, insula, orbitofrontal cortex, and dorsal medial prefrontal cortex (dmPFC) as regions of interest. Mood and subjective experience were also measured before and after drug administration using self-report scales. RESULTS: Our results revealed, as expected, no significant differences in subjective experience with a single dose of THCv. However, we found reduced resting state functional connectivity between the amygdala seed region and the default mode network, and increased resting state functional connectivity between the amygdala seed region and the dorsal anterior cingulate cortex and between the dmPFC seed region and the inferior frontal gyrus/medial frontal gyrus. We also found a positive correlation under placebo between amygdala-precuneus connectivity and body mass index, although this correlation was not apparent under THCv. CONCLUSION: Our findings are the first to show that treatment with the CB1 neutral antagonist THCv decreases resting state functional connectivity in the default mode network and increases connectivity in the cognitive control network and dorsal visual stream network. This effect profile suggests possible therapeutic activity of THCv for obesity, where functional connectivity has been found to be altered in these regions.
Abstract:
The frontal pole corresponds to Brodmann area (BA) 10, the largest single architectonic area in the human frontal lobe. Generally, BA10 is thought to contain two or three subregions that subserve broad functions such as multitasking, social cognition, attention, and episodic memory. However, there is a substantial debate about the functional and structural heterogeneity of this large frontal region. Previous connectivity-based parcellation studies have identified two or three subregions in the human frontal pole. Here, we used diffusion tensor imaging to assess structural connectivity of BA10 in 35 healthy subjects and delineated subregions based on this connectivity. This allowed us to determine the correspondence of structurally based subregions with the scheme previously defined functionally. Three subregions could be defined in each subject. However, these three subregions were not spatially consistent between subjects. Therefore, we accepted a solution with two subregions that encompassed the lateral and medial frontal pole. We then examined resting-state functional connectivity of the two subregions and found significant differences between their connectivities. The medial cluster was connected to nodes of the default-mode network, which is implicated in internally focused, self-related thought, and social cognition. The lateral cluster was connected to nodes of the executive control network, associated with directed attention and working memory. These findings support the concept that there are two major anatomical subregions of the frontal pole related to differences in functional connectivity.
Abstract:
Biaxially oriented films produced from semi-crystalline, semi-aromatic polyesters are utilised extensively as components within various applications, including the specialist packaging, flexible electronic and photovoltaic markets. However, the thermal performance of such polyesters, specifically poly(ethylene terephthalate) (PET) and poly(ethylene-2,6-naphthalate) (PEN), is inadequate for several applications that require greater dimensional stability at higher operating temperatures. The work described in this project is therefore primarily focussed upon the copolymerisation of rigid comonomers with PET and PEN, in order to produce novel polyester-based materials that exhibit superior thermomechanical performance, with retention of crystallinity, to achieve biaxial orientation. Rigid biphenyldiimide comonomers were readily incorporated into PEN and poly(butylene-2,6-naphthalate) (PBN) via a melt-polycondensation route. For each copoly(ester-imide) series, retention of semi-crystalline behaviour is observed throughout entire copolymer composition ratios. This phenomenon may be rationalised by cocrystallisation between isomorphic biphenyldiimide and naphthalenedicarboxylate residues, which enables statistically random copolymers to melt-crystallise despite high proportions of imide sub-units being present. In terms of thermal performance, the glass transition temperature, Tg, linearly increases with imide comonomer content for both series. This facilitated the production of several high performance PEN-based biaxially oriented films, which displayed analogous drawing, barrier and optical properties to PEN. Selected PBN copoly(ester-imide)s also possess the ability to either melt-crystallise, or form a mesophase from the isotropic state depending on the applied cooling rate. 
An equivalent synthetic approach based upon isomorphic comonomer crystallisation was subsequently applied to PET by copolymerisation with rigid diimide and Kevlar®-type amide comonomers, to afford several novel high performance PET-based copoly(ester-imide)s and copoly(ester-amide)s that all exhibited increased Tgs. Retention of crystallinity was achieved in these copolymers by either melt-crystallisation or thermal annealing. The initial production of a semi-crystalline, PET-based biaxially oriented film with a Tg in excess of 100 °C was successful, and this material has obvious scope for further industrial scale-up and process development.
An LDA and probability-based classifier for the diagnosis of Alzheimer's Disease from structural MRI
Abstract:
In this paper, a custom classification algorithm based on linear discriminant analysis and probability-based weights is implemented and applied to hippocampus measurements from structural magnetic resonance images of healthy subjects and Alzheimer's Disease sufferers, with the aim of diagnosing them as accurately as possible. The classifier works by classifying each hippocampal volume measurement as healthy-control-sized or Alzheimer's-Disease-sized; these new features are then weighted and used to classify the subject as a healthy control or as suffering from Alzheimer's Disease. The preliminary results reach an accuracy of 85.8%, which is similar to state-of-the-art methods such as a Naive Bayes classifier and a Support Vector Machine. An advantage of the method proposed in this paper over the aforementioned state-of-the-art classifiers is the descriptive ability of the classifications it produces. The descriptive model can be of great help to a doctor in the diagnosis of Alzheimer's Disease, or even further the understanding of how Alzheimer's Disease affects the hippocampus.
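The two-stage scheme above can be sketched in a few lines of NumPy. The data are synthetic, the per-measurement threshold is the 1-D linear discriminant under equal variances, and the weights are simple per-measurement accuracies; the paper's actual data and probability-based weighting are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic hippocampal measurements: controls larger than AD on average.
n = 100
controls = rng.normal([3.2, 3.1, 2.9], 0.3, (n, 3))  # volumes, illustrative units
patients = rng.normal([2.6, 2.5, 2.4], 0.3, (n, 3))

X = np.vstack([controls, patients])
y = np.array([0] * n + [1] * n)  # 0 = healthy control, 1 = AD

# Stage 1: label each measurement as control-sized or AD-sized using the
# midpoint of the class means (the 1-D discriminant under equal variances).
thresh = (controls.mean(axis=0) + patients.mean(axis=0)) / 2
labels = (X < thresh).astype(int)  # 1 = "AD-sized" measurement

# Stage 2: weight each measurement by how well it separates the classes,
# then take a weighted vote to classify the subject.
weights = np.array([np.mean(labels[:, j] == y) for j in range(3)])
scores = labels @ weights / weights.sum()
pred = (scores > 0.5).astype(int)
print(np.mean(pred == y))  # training accuracy of the toy model
```

The intermediate per-measurement labels are what give the approach its descriptive ability: a clinician can see which hippocampal measurements drove the subject-level decision, unlike with an opaque SVM score.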
Abstract:
By-product streams from a sunflower-based biodiesel plant were utilised for the production of fermentation media that can be used for the production of polyhydroxyalkanoates (PHA). Sunflower meal was utilised as substrate for the production of crude enzyme consortia through solid state fermentation (SSF) with the fungal strain Aspergillus oryzae. Fermented solids were subsequently mixed with unprocessed sunflower meal aiming at the production of a nutrient-rich fermentation feedstock. The highest free amino nitrogen (FAN) and inorganic phosphorus concentrations achieved were 1.5 g L-1 and 246 mg L-1, respectively, when an initial proteolytic activity of 6.4 U mL-1 was used. The FAN concentration was increased to 2.3 g L-1 when the initial proteolytic activity was increased to 16 U mL-1. Sunflower meal hydrolysates were mixed with crude glycerol to provide fermentation media that were evaluated for the production of poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (P(3HB-co-3HV)) using Cupriavidus necator DSM545. The P(3HB-co-3HV) produced (9.9 g L-1) contained 3HB and 3HV units at 97 and 3 mol%, respectively. Integrating PHA production into existing first-generation biodiesel production plants through valorisation of by-product streams could improve their sustainability.
Abstract:
We study the effect of bank loans on Chinese publicly listed firms' investment decisions based on the underinvestment and overinvestment theories of leverage. Evidence from China is of particular importance because China is the world's largest emerging and transitional economy. First, we show that there is a negative relationship between bank loan ratios and investment for Chinese publicly listed firms, and that this negative relationship is much stronger for firms with low growth than for firms with high growth. Secondly, we find that both short-term and long-term loan ratios are negatively correlated with investment. However, the higher the long-term loan ratios are, the weaker the negative relationship between long-term loan ratios and investment is. Thirdly, firm ownership only matters to the effect of short-term bank loans on investment in our sample: the negative relationship between short-term loan ratios and investment is weaker for SOEs than for non-SOEs. Lastly, we show that the reform of China's banking system in 2003 has not strengthened the negative relationship between bank loans and investment. Our findings suggest that although Chinese state-owned banks are subject to heavy government intervention, they still play a disciplining role in firms' investment, especially in firms with low growth.
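The core specification behind these results, investment regressed on the loan ratio with an interaction that lets the slope differ for low-growth firms, can be sketched as follows. The data and coefficients are synthetic and the variable names are illustrative; this is not the paper's estimation.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500

loan = rng.uniform(0.0, 0.6, n)      # bank loan ratio
low_growth = rng.integers(0, 2, n)   # 1 = low-growth firm, 0 = high-growth

# Investment falls with leverage, more steeply for low-growth firms.
invest = 0.20 - 0.10 * loan - 0.15 * loan * low_growth + rng.normal(0, 0.02, n)

# OLS with an interaction term: invest = b0 + b1*loan + b2*lg + b3*loan*lg.
X = np.column_stack([np.ones(n), loan, low_growth, loan * low_growth])
beta, *_ = np.linalg.lstsq(X, invest, rcond=None)
print(beta)  # b1 < 0; b3 < 0 makes the slope more negative for low growth
```

A negative interaction coefficient is exactly the "stronger negative relationship for low-growth firms" pattern reported above, consistent with the overinvestment interpretation of leverage as a disciplining device.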
Abstract:
This special issue is a testament to the recent burgeoning interest by theoretical linguists, language acquisitionists and teaching practitioners in the neuroscience of language. It offers a highly valuable, state-of-the-art overview of the neurophysiological methods that are currently being applied to questions in the field of second language (L2) acquisition, teaching and processing. Research in the area of neurolinguistics has developed dramatically in the past twenty years, providing a wealth of exciting findings, many of which are discussed in the papers in this volume. The goal of this commentary is twofold. The first is to critically assess the current state of neurolinguistic data from the point of view of language acquisition and processing—informed by the papers that comprise this special issue and the literature as a whole—pondering how the neuroscience of language/processing might inform us with respect to linguistic and language acquisition theories. The second goal is to offer some links from implications of exploring the first goal towards informing language teachers and the creation of linguistically and neurolinguistically-informed evidence-based pedagogies for non-native language teaching.
Abstract:
Despite the importance of dust aerosol in the Earth system, state-of-the-art models show a large variety in North African dust emission. This study presents a systematic evaluation of dust-emitting winds in 30 years of the historical simulation with the UK Met Office Earth-system model HadGEM2-ES for the Coupled Model Intercomparison Project Phase 5. Isolating the effect of winds on dust emission and using an automated detection for nocturnal low-level jets (NLLJs) allow an in-depth evaluation of the model performance for dust emission from a meteorological perspective. The findings highlight that NLLJs are a key driver for dust emission in HadGEM2-ES in terms of occurrence frequency and strength. The annually and spatially averaged occurrence frequency of NLLJs is similar in HadGEM2-ES and ERA-Interim from the European Centre for Medium-Range Weather Forecasts. Compared to ERA-Interim, a stronger pressure ridge over northern Africa in winter and a southward-displaced heat low in summer result in differences in the location and strength of NLLJs. In particular, the larger geostrophic winds associated with the stronger ridge strengthen NLLJs over parts of West Africa in winter. Stronger NLLJs in summer may instead result from the artificially increased mixing coefficient under stable stratification in ERA-Interim, which is weaker in HadGEM2-ES. NLLJs in the Bodélé Depression are affected by stronger synoptic-scale pressure gradients in HadGEM2-ES. Wintertime geostrophic winds can even be so strong that the associated vertical wind shear prevents the formation of NLLJs. These results call for further model improvements in the synoptic-scale dynamics and the physical parametrization of the nocturnal stable boundary layer to better represent dust-emitting processes in the atmospheric model. The new approach could be used for identifying systematic behavior in other models with respect to meteorological processes for dust emission. This would help to improve dust emission simulations and contribute to decreasing the currently large uncertainty in climate change projections with respect to dust aerosol.
Abstract:
Subspace clustering groups a set of samples from a union of several linear subspaces into clusters, so that the samples in the same cluster are drawn from the same linear subspace. In the majority of the existing work on subspace clustering, clusters are built based on feature information, while sample correlations in their original spatial structure are simply ignored. Besides, the original high-dimensional feature vectors contain noisy/redundant information, and the time complexity grows exponentially with the number of dimensions. To address these issues, we propose a tensor low-rank representation (TLRR) and sparse coding-based (TLRRSC) subspace clustering method that simultaneously considers feature information and spatial structures. TLRR seeks the lowest-rank representation over the original spatial structure along all spatial directions. Sparse coding learns a dictionary along the feature space, so that each sample can be represented by a few atoms of the learned dictionary. The affinity matrix used for spectral clustering is built from the joint similarities in both the spatial and feature spaces. TLRRSC can well capture the global structure and inherent feature information of data, and provides a robust subspace segmentation for corrupted data. Experimental results on both synthetic and real-world data sets show that TLRRSC outperforms several established state-of-the-art methods.
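The self-representation idea underlying this family of methods can be illustrated with a minimal sketch: express each sample as a combination of the other samples and use the coefficient magnitudes as an affinity. A ridge-regularised least-squares representation stands in here for the low-rank/sparse steps; the tensor formulation of TLRRSC is not reproduced, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two 1-D subspaces (lines) in R^5, 20 samples drawn from each.
b1, b2 = rng.normal(size=(5, 1)), rng.normal(size=(5, 1))
X = np.hstack([b1 @ rng.normal(size=(1, 20)), b2 @ rng.normal(size=(1, 20))])

# Ridge-regularised self-representation X ~= X Z, a simple stand-in for
# the low-rank representation step: Z = (X'X + lam*I)^{-1} X'X.
lam = 0.1
G = X.T @ X
Z = np.linalg.solve(G + lam * np.eye(40), G)
A = np.abs(Z) + np.abs(Z).T  # symmetric affinity for spectral clustering

# Samples from the same subspace should be far more strongly connected.
within = A[:20, :20].mean() + A[20:, 20:].mean()
between = A[:20, 20:].mean() + A[20:, :20].mean()
print(within > between)  # True: affinity is block-diagonal in expectation
```

Spectral clustering on such a block-structured affinity recovers the subspace membership; TLRRSC strengthens the same construction by enforcing low rank along spatial directions and sparsity along the feature dimension.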