961 results for Isotropic and Anisotropic models


Relevance:

100.00%

Publisher:

Abstract:

In this paper the exchange rate forecasting performance of neural network models is evaluated against a random walk and a range of time series models. No guidelines are available for choosing the parameters of neural network models, so the parameters are typically chosen according to what the researcher considers best. Such an approach, however, carries an extremely high risk of poor decisions, which could explain why in many studies neural network models do not consistently outperform their time series counterparts. In this paper, extensive experimentation considerably reduces the level of subjectivity in building neural network models, giving them a better chance of performing well. Our results show that, in general, neural network models outperform traditionally used time series models in forecasting exchange rates.
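A toy sketch of the benchmarking idea described above, with synthetic data and a simple AR(1) fit standing in for the paper's neural network and time series models (all names and values here are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "exchange rate": a driftless random walk (illustrative data,
# not the paper's series).
rate = 1.5 + np.cumsum(rng.normal(0, 0.01, 500))
train, test_set = rate[:400], rate[400:]

# Random-walk benchmark: tomorrow's forecast is today's value.
rw_pred = rate[399:499]

# Simple AR(1) stand-in for the time series models, fit by least squares.
A = np.vstack([train[:-1], np.ones(399)]).T
phi, c = np.linalg.lstsq(A, train[1:], rcond=None)[0]
ar_pred = phi * rate[399:499] + c

rmse = lambda pred: float(np.sqrt(np.mean((test_set - pred) ** 2)))
print(f"random walk RMSE: {rmse(rw_pred):.5f}, AR(1) RMSE: {rmse(ar_pred):.5f}")
```

On a pure random walk no model should beat the naive forecast by much; the paper's point is that carefully tuned neural networks can do so on real exchange rate data.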

Abstract:

This paper presents some forecasting techniques for one-day-ahead energy demand and price prediction. These techniques combine the wavelet transform (WT) with fixed and adaptive machine learning/time series models (multi-layer perceptron (MLP), radial basis functions, linear regression, or GARCH). To create an adaptive model, we use an extended Kalman filter or particle filter to update the parameters continuously on the test set. The adaptive GARCH model is a new contribution, broadening the applicability of GARCH methods. We empirically compared two approaches to combining the WT with prediction models: multicomponent forecasts and direct forecasts. These techniques are applied to large sets of real data (both stationary and non-stationary) from the UK energy markets, so as to provide comparative results that are statistically stronger than those previously reported. The results showed that forecasting accuracy is significantly improved by using the WT and adaptive models. The best models for the electricity demand and gas price forecasts are the adaptive MLP and adaptive GARCH with the multicomponent forecast; their MSEs are 0.02314 and 0.15384, respectively.
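A minimal sketch of the multicomponent idea, using a one-level Haar split (the simplest wavelet) and persistence rules in place of the paper's MLP/GARCH component models, on synthetic demand data:

```python
import numpy as np

def haar_split(x):
    """One-level Haar decomposition into a smooth (approximation) and a
    detail component, each expanded back to the original length so that
    smooth + detail reconstructs x exactly."""
    a = (x[0::2] + x[1::2]) / 2                      # pairwise averages
    d = (x[0::2] - x[1::2]) / 2                      # pairwise differences
    smooth = np.repeat(a, 2)
    detail = np.repeat(d, 2) * np.tile([1.0, -1.0], a.size)
    return smooth, detail

rng = np.random.default_rng(1)
t = np.arange(256)
demand = 50 + 10 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 1, t.size)

smooth, detail = haar_split(demand)

# Multicomponent one-step forecast: predict each component separately
# (persistence for the smooth part, zero for the noisy detail part),
# then recombine -- versus the direct persistence forecast demand[-1].
multicomponent = smooth[-1] + 0.0
direct = demand[-1]
print(multicomponent, direct)
```

The multicomponent approach presumably replaces these persistence rules with a separate MLP or GARCH forecast per WT component before recombining; the direct approach forecasts the raw series in one step.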

Abstract:

Ernst Mach observed that light or dark bands could be seen at abrupt changes of luminance gradient in the absence of peaks or troughs in luminance. Many models of feature detection share the idea that bars, lines, and Mach bands are found at peaks and troughs in the output of even-symmetric spatial filters. Our experiments assessed the appearance of Mach bands (position and width) and the probability of seeing them on a novel set of generalized Gaussian edges. Mach band probability was mainly determined by the shape of the luminance profile and increased with the sharpness of its corners, controlled by a single parameter (n). Doubling or halving the size of the images had no significant effect. Variations in contrast (20%-80%) and duration (50-300 ms) had relatively minor effects. These results rule out the idea that Mach bands depend simply on the amplitude of the second derivative, but a multiscale model, based on Gaussian-smoothed first- and second-derivative filtering, can account accurately for the probability and perceived spatial layout of the bands. A key idea is that Mach band visibility depends on the ratio of second- to first-derivative responses at peaks in the second-derivative scale-space map. This ratio is approximately scale-invariant and increases with the sharpness of the corners of the luminance ramp, as observed. The edges of Mach bands pose a surprisingly difficult challenge for models of edge detection, but a nonlinear third-derivative operation is shown to predict the locations of Mach band edges strikingly well. Mach bands thus shed new light on the role of multiscale filtering systems in feature coding. © 2012 ARVO.
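The ratio diagnostic described above can be sketched numerically; the edge profile and the single filter scale below are illustrative assumptions, not the study's stimuli:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Hypothetical generalized-Gaussian-style edge; larger n gives sharper
# corners on the luminance ramp.
x = np.linspace(-4, 4, 2001)
def edge(x, n):
    return np.sign(x) * (1 - np.exp(-np.abs(x) ** n))

ratios = {}
for n in (1.0, 4.0):                                 # shallow vs sharp corners
    lum = edge(x, n)
    d1 = gaussian_filter1d(lum, sigma=40, order=1)   # 1st-derivative filter
    d2 = gaussian_filter1d(lum, sigma=40, order=2)   # 2nd-derivative filter
    peak = np.argmax(np.abs(d2))                     # peak of the d2 response
    ratios[n] = abs(d2[peak]) / (abs(d1[peak]) + 1e-12)
    print(f"n={n}: |d2|/|d1| at the d2 peak = {ratios[n]:.4f}")
```

In the full model this ratio is tracked across a scale-space of filter sizes; a single scale suffices here to show the quantity being computed.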

Abstract:

This article reflects on the UK coalition government’s ‘alternative models’ agenda, specifically the adoption of new models of service delivery by arm’s-length bodies (ALBs). It provides an overview of the alternative models agenda and discusses barriers to implementation, including the practical challenges of setting up alternative models, the role of sponsor departments, and the effective communication of best practice. Finally, the article highlights some issues for further discussion.

Abstract:

Phosphorylation processes are common post-transductional mechanisms by which a number of metabolic pathways can be modulated. Proteins are highly sensitive to phosphorylation, which governs many protein-protein interactions. The enzymatic activity of some protein tyrosine-kinases is under tyrosine-phosphorylation control, as are several transmembrane anion fluxes and cation exchanges. In addition, phosphorylation reactions are involved in intra- and extracellular 'cross-talk' processes. Early studies adopted laboratory animals to study these little-known phosphorylation processes. The main difficulty encountered with these animal techniques was obtaining sufficient kinase or phosphatase activity suitable for studying the enzymatic process. Large amounts of biological material from organs such as the liver and spleen were necessary to conduct such work with protein kinases. Subsequent studies revealed the ubiquity and complexity of phosphorylation processes, and techniques evolved from early rat studies to the adaptation of more rewarding in vitro models. These involved human erythrocytes, which are a convenient source both for the enzymes we investigated and for their substrates. This preliminary work facilitated the development of more advanced phosphorylative models that are based on cell lines. © 2005 Elsevier B.V. All rights reserved.

Abstract:

2000 Mathematics Subject Classification: 62H12, 62P99

Abstract:

2000 Mathematics Subject Classification: Primary 47A20, 47A45; Secondary 47A48.

Abstract:

A pre-test, post-test, quasi-experimental design was used to examine the effects of student-centered and traditional models of reading instruction on outcomes of literal comprehension and critical thinking skills. The sample for this study consisted of 101 adult students enrolled in a high-level developmental reading course at a large, urban community college in the Southeastern United States. The experimental group consisted of 48 students, and the control group consisted of 53 students. Students in the experimental group were limited in the time spent reading a course text of basic skills, with instructors using supplemental materials such as poems, news articles, and novels. Discussions, the reading-writing connection, and student choice in material selection were also part of the student-centered curriculum. Students in the control group relied heavily on a course text and vocabulary text for reading material, with great focus placed on basic skills. Activities consisted primarily of multiple-choice questioning and quizzes. The instrument used to collect pre-test data was Descriptive Tests of Language Skills in Reading Comprehension; post-test data were taken from the Florida College Basic Skills Exit Test. A MANCOVA was used as the statistical method to determine if either model of instruction led to significantly higher gains in literal comprehension skills or critical thinking skills. A paired samples t-test was also used to compare pre-test and post-test means. The results of the MANCOVA indicated no significant difference between instructional models on scores of literal comprehension and critical thinking. Neither was there any significant difference in scores between subgroups of age (under 25 and 25 and older) and language background (native English speaker and second-language learner). 
The results of the t-test indicated, however, that students taught under both instructional models made significant gains in both literal comprehension and critical thinking skills from pre-test to post-test.
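A paired-samples t-test of the kind used above can be sketched with synthetic scores (illustrative numbers, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic pre-/post-test scores for one group of 48 students:
# a mean gain of about 5 points from pre-test to post-test.
pre = rng.normal(70, 8, 48)
post = pre + rng.normal(5, 4, 48)

t_stat, p_value = stats.ttest_rel(post, pre)   # paired-samples t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```

A small p-value indicates a significant pre-to-post gain; the study ran this comparison for both instructional groups, alongside a MANCOVA for between-group differences.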

Abstract:

The sedimentary sections of three cores from the Celtic margin provide high-resolution records of the terrigenous fluxes during the last glacial cycle. A total of 21 14C AMS dates allow us to define age models with a resolution better than 100 yr during critical periods such as Heinrich events 1 and 2. Maximum sedimentary fluxes occurred at the Meriadzek Terrace site during the Last Glacial Maximum (LGM). Detailed X-ray imagery of core MD95-2002 from the Meriadzek Terrace shows no sedimentary structures suggestive of either deposition from high-density turbidity currents or significant erosion. Two paroxysmal terrigenous flux episodes have been identified. The first occurred after the deposition of Heinrich event 2 Canadian ice-rafted debris (IRD) and includes IRD from European sources. We suggest that the second represents an episode of deposition from turbid plumes, which precedes IRD deposition associated with Heinrich event 1. At the end of marine isotopic stage 2 (MIS 2) and the beginning of MIS 1 the highest fluxes are recorded on the Whittard Ridge, where they correspond to deposition from turbidity current overflows. Canadian icebergs rafted debris to the Celtic margin during Heinrich events 1, 2, 4 and 5. The high-resolution records of Heinrich events 1 and 2 show that in both cases the arrival of the Canadian icebergs was preceded by a European ice rafting precursor event, which took place about 1-1.5 kyr before. Two rafting episodes of European IRD also occurred immediately after Heinrich event 2 and just before Heinrich event 1. The terrigenous fluxes recorded in core MD95-2002 during the LGM are the highest reported at hemipelagic sites from the northwestern European margin. The magnitude of the Canadian IRD fluxes at Meriadzek Terrace is similar to those from oceanic sites.
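An age model of the kind described (ages interpolated between dated levels) can be sketched as follows; the tie points are hypothetical, not the published MD95-2002 chronology:

```python
import numpy as np

# Hypothetical depth-age tie points (the real chronology rests on
# 21 AMS 14C dates and is not reproduced here).
depth_cm = np.array([0.0, 120.0, 260.0, 400.0, 540.0])
age_yr = np.array([1500.0, 14500.0, 18000.0, 21500.0, 26000.0])

# Linear interpolation assigns an age to every sampled depth ...
sample_depths = np.arange(0.0, 541.0, 10.0)
sample_ages = np.interp(sample_depths, depth_cm, age_yr)

# ... and the tie points give sedimentation rates (cm/kyr), from which
# terrigenous fluxes follow once dry bulk density is measured.
sed_rate = np.diff(depth_cm) / (np.diff(age_yr) / 1000.0)
print(np.round(sed_rate, 1))
```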

Abstract:

Magnetic resonance imaging is a research and clinical tool that has been applied in a wide variety of sciences. One area of magnetic resonance imaging that has exhibited terrific promise and growth in the past decade is magnetic susceptibility imaging. Imaging tissue susceptibility provides insight into the microstructural organization and chemical properties of biological tissues, but this image contrast is not well understood. The purpose of this work is to develop effective approaches to image, assess, and model the mechanisms that generate both isotropic and anisotropic magnetic susceptibility contrast in biological tissues, including myocardium and central nervous system white matter.

This document contains the first report of MRI-measured susceptibility anisotropy in myocardium. Intact mouse heart specimens were scanned using MRI at 9.4 T to ascertain both the magnetic susceptibility and the myofiber orientation of the tissue. The susceptibility anisotropy of myocardium was observed and measured by relating the apparent tissue susceptibility to the myofiber angle with respect to the applied magnetic field. A multi-filament model of myocardial tissue revealed that the diamagnetically anisotropic α-helix peptide bonds in myofilament proteins are capable of producing bulk susceptibility anisotropy on a scale measurable by MRI, and are potentially the chief sources of the experimentally observed anisotropy.
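One common single-fiber form of this angular dependence (an assumption here, not necessarily the thesis's exact model) is χ(θ) = χ_iso + Δχ(cos²θ − 1/3), which can be fitted to angle-resolved measurements by linear least squares:

```python
import numpy as np

rng = np.random.default_rng(3)

# chi(theta) = chi_iso + d_chi * (cos(theta)**2 - 1/3); theta is the
# myofiber angle to B0. Ground-truth values below are made up (ppm).
theta = np.deg2rad(np.arange(0.0, 91.0, 5.0))
chi_true = -0.05 + 0.02 * (np.cos(theta) ** 2 - 1.0 / 3.0)
chi_meas = chi_true + rng.normal(0, 5e-4, theta.size)   # noisy "MRI" data

# Linear least-squares recovery of chi_iso and the anisotropy d_chi.
A = np.vstack([np.ones_like(theta), np.cos(theta) ** 2 - 1.0 / 3.0]).T
chi_iso, d_chi = np.linalg.lstsq(A, chi_meas, rcond=None)[0]
print(f"chi_iso = {chi_iso:.4f} ppm, d_chi = {d_chi:.4f} ppm")
```

A nonzero Δχ recovered from such a fit is the signature of susceptibility anisotropy relative to the fiber axis.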

The growing use of paramagnetic contrast agents in magnetic susceptibility imaging motivated a series of investigations regarding the effect of these exogenous agents on susceptibility imaging in the brain, heart, and kidney. In each of these organs, gadolinium increases susceptibility contrast and anisotropy, though the enhancements depend on the tissue type, compartmentalization of contrast agent, and complex multi-pool relaxation. In the brain, the introduction of paramagnetic contrast agents actually makes white matter tissue regions appear more diamagnetic relative to the reference susceptibility. Gadolinium-enhanced MRI yields tensor-valued susceptibility images with eigenvectors that more accurately reflect the underlying tissue orientation.

Despite the boost gadolinium provides, tensor-valued susceptibility image reconstruction is prone to image artifacts. A novel algorithm was developed to mitigate these artifacts by incorporating orientation-dependent tissue relaxation information into susceptibility tensor estimation. The technique was verified using a numerical phantom simulation, and improves susceptibility-based tractography in the brain, kidney, and heart. This work represents the first successful application of susceptibility-based tractography to a whole, intact heart.

The knowledge and tools developed throughout the course of this research were then applied to studying mouse models of Alzheimer’s disease in vivo, and studying hypertrophic human myocardium specimens ex vivo. Though a preliminary study using contrast-enhanced quantitative susceptibility mapping has revealed diamagnetic amyloid plaques associated with Alzheimer’s disease in the mouse brain ex vivo, non-contrast susceptibility imaging was unable to precisely identify these plaques in vivo. Susceptibility tensor imaging of human myocardium specimens at 9.4 T shows that susceptibility anisotropy is larger and mean susceptibility is more diamagnetic in hypertrophic tissue than in normal tissue. These findings support the hypothesis that myofilament proteins are a source of susceptibility contrast and anisotropy in myocardium. This collection of preclinical studies provides new tools and context for analyzing tissue structure, chemistry, and health in a variety of organs throughout the body.

Abstract:

Nature is challenged to move charge efficiently over many length scales. From sub-nm to μm distances, electron-transfer proteins orchestrate energy conversion, storage, and release both inside and outside the cell. Uncovering the detailed mechanisms of biological electron-transfer reactions, which are often coupled to bond-breaking and bond-making events, is essential to designing durable, artificial energy conversion systems that mimic the specificity and efficiency of their natural counterparts. Here, we use theoretical modeling of long-distance charge hopping (Chapter 3), synthetic donor-bridge-acceptor molecules (Chapters 4, 5, and 6), and de novo protein design (Chapters 5 and 6) to investigate general principles that govern light-driven and electrochemically driven electron-transfer reactions in biology. We show that fast, μm-distance charge hopping along bacterial nanowires requires closely packed charge carriers with low reorganization energies (Chapter 3); singlet excited-state electronic polarization of supermolecular electron donors can attenuate intersystem crossing yields to lower-energy, oppositely polarized, donor triplet states (Chapter 4); the effective static dielectric constant of a small (~100 residue) de novo designed 4-helical protein bundle can change upon phototriggering an electron transfer event in the protein interior, providing a means to slow the charge-recombination reaction (Chapter 5); and a tightly-packed de novo designed 4-helix protein bundle can drastically alter charge-transfer driving forces of photo-induced amino acid radical formation in the bundle interior, effectively turning off a light-driven oxidation reaction that occurs in organic solvent (Chapter 6). 
This work leverages unique insights gleaned from proteins designed from scratch that bind synthetic donor-bridge-acceptor molecules that can also be studied in organic solvents, opening new avenues of exploration into the factors critical for protein control of charge flow in biology.

Abstract:

Models of the air-sea transfer velocity of gases may be either empirical or mechanistic. Extrapolations of empirical models to an unmeasured gas or to another water temperature can be erroneous if the basis of that extrapolation is flawed. This issue is readily demonstrated for the most well-known empirical gas transfer velocity models where the influence of bubble-mediated transfer, which can vary between gases, is not explicitly accounted for. Mechanistic models are hindered by an incomplete knowledge of the mechanisms of air-sea gas transfer. We describe a hybrid model that incorporates a simple mechanistic view—strictly enforcing a distinction between direct and bubble-mediated transfer—but also uses parameterizations based on data from eddy flux measurements of dimethyl sulphide (DMS) to calibrate the model together with dual tracer results to evaluate the model. This model underpins simple algorithms that can be easily applied within schemes to calculate local, regional, or global air-sea fluxes of gases.
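A minimal sketch of the hybrid structure, with hypothetical functional forms and coefficients in place of the DMS-calibrated parameterizations:

```python
import numpy as np

def transfer_velocity(u10, sc, solubility):
    """Hybrid gas transfer velocity (cm/h). u10: 10-m wind speed (m/s);
    sc: Schmidt number; solubility: dimensionless Ostwald solubility.
    Coefficients are placeholders, not the DMS-calibrated values."""
    k_direct = 0.2 * u10 * (sc / 660.0) ** -0.5     # interfacial term
    k_bubble = 2e-3 * u10 ** 3.4 / solubility       # bubble-mediated term
    return k_direct + k_bubble

# More soluble gases (e.g. DMS) receive less bubble-mediated enhancement.
for gas, sc, sol in [("CO2", 660.0, 0.8), ("DMS", 1000.0, 12.0)]:
    print(f"{gas}: k = {transfer_velocity(10.0, sc, sol):.2f} cm/h")
```

Keeping the direct and bubble-mediated terms separate is what allows such a model to extrapolate across gases and water temperatures, which single-term empirical fits cannot do safely.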
