39 results for Low Autocorrelation Binary Sequence Problem
                                
Abstract:
We show that for any sample size, any size of the test, and any weights matrix outside a small class of exceptions, there exists a positive measure set of regression spaces such that the power of the Cliff-Ord test vanishes as the autocorrelation increases in a spatial error model. This result extends to the tests that define the Gaussian power envelope of all invariant tests for residual spatial autocorrelation. In most cases, the regression spaces such that the problem occurs depend on the size of the test, but there also exist regression spaces such that the power vanishes regardless of the size. A characterization of such particularly hostile regression spaces is provided.
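
(Illustration: the Cliff-Ord test statistic for residual spatial autocorrelation is commonly written in the Moran-type form I = e'We / e'e, computed from OLS residuals e and the spatial weights matrix W. The Python sketch below shows this generic textbook form under names of our own choosing; it is not the paper's specific construction of the power envelope.)

import numpy as np

def cliff_ord_statistic(y, X, W):
    # OLS residuals of y on X.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    # Moran-type Cliff-Ord statistic: I = e'We / e'e.
    return (e @ W @ e) / (e @ e)

# Tiny usage example with a random row-standardised weights matrix.
rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
W = np.abs(rng.normal(size=(n, n)))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
print(cliff_ord_statistic(y, X, W))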
                                
Abstract:
In late 2005, a number of German open-ended funds suffered significant withdrawals by unit holders. The crisis was precipitated by a long-term bear market in German property investment and the fact that these funds offered short-term liquidity to unit holders but held low levels of liquidity in the fund. A more controversial suggestion was that the crisis was exacerbated by a perception that the valuations of the funds were too infrequent and inaccurate. As units are priced by reference to these valuations, with no secondary market, the valuation process is central to the pricing of units. There is no direct evidence that these funds were over-valued, but there is circumstantial evidence, and this paper examines that indirect evidence to assess whether the hypothesis that valuation is an issue for the German funds holds any credibility. It also discusses whether there is a wider issue for other funds of this nature or whether the problem is parochial and confined to Germany. The conclusions are that there is reason to believe that German valuation processes make over-valuation in a recession more likely than in other countries, and that more direct research into the German valuation system is required to identify the issues that need to be addressed to make the valuation system more trusted.
                                
Abstract:
The elucidation of the domain content of a given protein sequence in the absence of determined structure or significant sequence homology to known domains is an important problem in structural biology. Here we address how successfully the delineation of continuous domains can be accomplished in the absence of sequence homology using simple baseline methods, an existing prediction algorithm (Domain Guess by Size), and a newly developed method (DomSSEA). The study was undertaken with a view to measuring the usefulness of these prediction methods in terms of their application to fully automatic domain assignment. Thus, the sensitivity of each domain assignment method was measured by calculating the number of correctly assigned top-scoring predictions. We have implemented a new continuous domain identification method using the alignment of predicted secondary structures of target sequences against observed secondary structures of chains with known domain boundaries as assigned by Class, Architecture, Topology, Homology (CATH). Taking top predictions only, the success rate of the method in correctly assigning domain number to the representative chain set is 73.3%. The top prediction for domain number and location of domain boundaries was correct for 24% of the multidomain set (±20 residues). These results have been put into context in relation to the results obtained from the other prediction methods assessed.
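
(Illustration: DomSSEA rests on aligning a predicted secondary-structure string against observed ones. A minimal sketch of such an alignment, using a plain Needleman-Wunsch scorer over the three-state H/E/C alphabet with a scoring scheme assumed by us, is given below; DomSSEA's actual scoring and boundary assignment are more involved.)

import numpy as np

def ss_align_score(pred, obs, match=1, mismatch=-1, gap=-1):
    # Global (Needleman-Wunsch) alignment score between two
    # secondary-structure strings over the alphabet H/E/C.
    # The scoring parameters are assumptions for illustration only.
    n, m = len(pred), len(obs)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1) * gap
    D[0, :] = np.arange(m + 1) * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if pred[i - 1] == obs[j - 1] else mismatch
            D[i, j] = max(D[i - 1, j - 1] + s,   # align the two states
                          D[i - 1, j] + gap,     # gap in observed string
                          D[i, j - 1] + gap)     # gap in predicted string
    return D[n, m]

# Rank template chains with known domain boundaries by this score,
# then transfer the boundaries of the best-scoring template.
print(ss_align_score("HHHHCCEEEECC", "HHHHCEEEECCC"))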
                                
Abstract:
Acrylamide, a chemical that is probably carcinogenic in humans and has neurological and reproductive effects, forms from free asparagine and reducing sugars during high-temperature cooking and processing of common foods. Potato and cereal products are major contributors to dietary exposure to acrylamide and while the food industry reacted rapidly to the discovery of acrylamide in some of the most popular foods, the issue remains a difficult one for many sectors. Efforts to reduce acrylamide formation would be greatly facilitated by the development of crop varieties with lower concentrations of free asparagine and/or reducing sugars, and of best agronomic practice to ensure that concentrations are kept as low as possible. This review describes how acrylamide is formed, the factors affecting free asparagine and sugar concentrations in crop plants, and the sometimes complex relationship between precursor concentration and acrylamide-forming potential. It covers some of the strategies being used to reduce free asparagine and sugar concentrations through genetic modification and other genetic techniques, such as the identification of quantitative trait loci. The link between acrylamide formation, flavour, and colour is discussed, as well as the difficulty of balancing the unknown risk of exposure to acrylamide in the levels that are present in foods with the well-established health benefits of some of the foods concerned. Key words: Amino acids, asparagine, cereals, crop quality, food safety, Maillard reaction, potato, rye, sugars, wheat.
                                
Abstract:
Objective: The objective of this study was to explore the relationship between low density lipoprotein (LDL) and dendritic cell (DC) activation, based upon the hypothesis that reactive oxygen species (ROS)-mediated modification of proteins that may be present in local DC microenvironments could be important as mediators of this activation. Although LDL are known to be oxidised in vivo and taken up by macrophages during atherogenesis, their effect on DC has not been explored previously. Methods: Human DCs were prepared from peripheral blood monocytes using GM-CSF and IL-4. Plasma LDLs were isolated by sequential gradient centrifugation, oxidised in CuSO4, and oxidation was arrested to yield mildly, moderately and highly oxidised LDL forms. DCs exposed to these LDLs were investigated using combined phenotypic, functional (autologous T cell activation), morphological and viability assays. Results: Highly oxidised LDL increased DC HLA-DR, CD40 and CD86 expression, corroborated by increased DC-induced T cell proliferation. Both native and oxidised LDL induced prominent DC clustering. However, high concentrations of highly oxidised LDL inhibited DC function, owing to increased DC apoptosis. Conclusions: This study supports the hypothesis that oxidised LDL are capable of triggering the transition from sentinel to messenger DC. Furthermore, the DC clustering–activation–apoptosis sequence in the presence of different LDL forms is consistent with a regulatory DC role in the immunopathogenesis of atheroma. A sequence of initial accumulation of DC, increasing LDL oxidation, and DC-induced T cell activation may explain why local breach of tolerance can occur. Above a threshold level, however, supervening DC apoptosis limits this, contributing instead to the central plaque core.
                                
Abstract:
When performing data fusion, one often measures where targets were and then wishes to deduce where targets currently are. There has been recent research on the processing of such out-of-sequence data. This research has culminated in the development of a number of algorithms for solving the associated tracking problem. This paper reviews these different approaches in a common Bayesian framework and proposes an architecture that orthogonalises the data association and out-of-sequence problems such that any combination of solutions to these two problems can be used together. The emphasis is not on advocating one approach over another on the basis of computational expense, but rather on understanding the relationships among the algorithms so that any approximations made are explicit. Results for a multi-sensor scenario involving out-of-sequence data association are used to illustrate the utility of this approach in a specific context.
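
(Illustration: the simplest baseline for out-of-sequence data is to buffer measurements and reprocess them in time order. The sketch below implements that baseline for a single-target Kalman filter, assuming a fixed per-measurement time step so that F and Q are constant; the paper reviews more efficient retrodiction-based algorithms, which this sketch does not reproduce.)

import numpy as np

class ReprocessingKF:
    # Buffer-and-reprocess handling of out-of-sequence measurements:
    # keep every measurement and, on each arrival, re-run the filter
    # over the buffer in time order.
    def __init__(self, F, H, Q, R, x0, P0):
        self.F, self.H, self.Q, self.R = F, H, Q, R
        self.x0, self.P0 = x0, P0
        self.buffer = []  # (timestamp, measurement) pairs

    def add_measurement(self, t, z):
        self.buffer.append((t, np.asarray(z)))
        self.buffer.sort(key=lambda tz: tz[0])  # restore time order
        return self._rerun()

    def _rerun(self):
        x, P = self.x0.copy(), self.P0.copy()
        I = np.eye(len(x))
        for _, z in self.buffer:
            x = self.F @ x                          # predict state
            P = self.F @ P @ self.F.T + self.Q      # predict covariance
            S = self.H @ P @ self.H.T + self.R      # innovation covariance
            K = P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
            x = x + K @ (z - self.H @ x)            # update state
            P = (I - K @ self.H) @ P                # update covariance
        return x, P

# Usage: a 1D constant-position model; the second measurement arrives late.
kf = ReprocessingKF(F=np.eye(1), H=np.eye(1), Q=0.01 * np.eye(1),
                    R=0.1 * np.eye(1), x0=np.zeros(1), P0=np.eye(1))
kf.add_measurement(2.0, [1.1])
print(kf.add_measurement(1.0, [0.9]))  # out-of-sequence, handled by reprocessing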
                                
Abstract:
In data fusion systems, one often encounters measurements of past target locations and then wishes to deduce where the targets are currently located. Recent research on the processing of such out-of-sequence data has culminated in the development of a number of algorithms for solving the associated tracking problem. This paper reviews these different approaches in a common Bayesian framework and proposes an architecture that orthogonalises the data association and out-of-sequence problems such that any combination of solutions to these two problems can be used together. The emphasis is not on advocating one approach over another on the basis of computational expense, but rather on understanding the relationships between the algorithms so that any approximations made are explicit.
                                
Abstract:
The UK Government is committed to all new homes being zero-carbon from 2016. The use of low and zero carbon (LZC) technologies is recognised by housing developers as being a key part of the solution to deliver against this zero-carbon target. The paper takes as its starting point that the selection of new technologies by firms is not a phenomenon which takes place within a rigid sphere of technical rationality (for example, Rip and Kemp, 1998). Rather, technology forms and diffusion trajectories are driven and shaped by myriad socio-technical structures, interests and logics. A literature review is offered to contribute to a more critical and systemic foundation for understanding the socio-technical features of the selection of LZC technologies in new housing. The problem is investigated through a multidisciplinary lens consisting of two perspectives: technological and institutional. The synthesis of the perspectives crystallises the need to understand that the selection of LZC technologies by housing developers is not solely dependent on technical or economic efficiency, but on the emergent ‘fit’ between the intrinsic properties of the technologies, institutional logics and the interests and beliefs of various actors in the housing development process.
                                
Abstract:
The behavior of the ensemble Kalman filter (EnKF) is examined in the context of a model that exhibits a nonlinear chaotic (slow) vortical mode coupled to a linear (fast) gravity wave of a given amplitude and frequency. It is shown that accurate recovery of both modes is enhanced when covariances between fast and slow normal-mode variables (which reflect the slaving relations inherent in balanced dynamics) are modeled correctly. More ensemble members are needed to recover the fast, linear gravity wave than the slow, vortical motion. Although the EnKF tends to diverge in the analysis of the gravity wave, the filter divergence is stable and does not lead to a great loss of accuracy. Consequently, provided the ensemble is large enough and observations are made that reflect both time scales, the EnKF is able to recover both time scales more accurately than optimal interpolation (OI), which uses a static error covariance matrix. For OI it is also found to be problematic to observe the state at a frequency that is a subharmonic of the gravity wave frequency, a problem that is in part overcome by the EnKF. However, error in the modeled gravity wave parameters can be detrimental to the performance of the EnKF and remove its implied advantages, suggesting that a modified algorithm or a method for accounting for model error is needed.
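
(Illustration: a generic stochastic EnKF analysis step with perturbed observations, in which the gain is formed from the ensemble sample covariance, so fast-slow cross-covariances are represented whenever the ensemble captures them. This is a textbook form under our own naming, not the exact configuration used in the study.)

import numpy as np

def enkf_analysis(Xf, y, H, R, rng):
    # Stochastic EnKF analysis with perturbed observations.
    # Xf: (n, N) forecast ensemble, y: (m,) observation,
    # H: (m, n) observation operator, R: (m, m) obs-error covariance.
    n, N = Xf.shape
    A = Xf - Xf.mean(axis=1, keepdims=True)          # ensemble anomalies
    Pf = A @ A.T / (N - 1)                           # sample covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    # Each member is updated against its own perturbed copy of y.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return Xf + K @ (Y - H @ Xf)

# Usage: 20-member ensemble for a 2-variable state, observing the first variable.
rng = np.random.default_rng(0)
Xf = rng.normal(size=(2, 20))
Xa = enkf_analysis(Xf, y=np.array([0.5]), H=np.array([[1.0, 0.0]]),
                   R=np.array([[0.1]]), rng=rng)
print(Xa.mean(axis=1))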
                                
Abstract:
In order to address the growing urgency of issues around environmental and resource limits, there is a clear need to develop policies that promote changes in behavior and the ways in which society both views and consumes goods and services. However, there is an argument to suggest that, in order to develop effective policies in this area, we need to move beyond a narrow understanding of ‘how individuals behave’ in order to cultivate a more nuanced approach that encompasses behavioral influences in different societies, contexts and settings. In this opinion article we therefore draw on a range of our own recent comparative research studies in order to provide fresh insights into the continued problem of how to engage people individually and collectively in establishing more sustainable, low-carbon societies.
                                
Abstract:
We present optical and ultraviolet spectra, light curves, and Doppler tomograms of the low-mass X-ray binary EXO 0748-676. Using an extensive set of 15 emission-line tomograms, we show that, along with the usual emission from the stream and "hot spot", there is extended nonaxisymmetric emission from the disk rim. Some of the emission and Hα and Hβ absorption features lend weight to the hypothesis that part of the stream overflows the disk rim and forms a two-phase medium. The data are consistent with a 1.35 M☉ neutron star with a main-sequence companion and hence a mass ratio q ≈ 0.34.
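
(Illustration: reading q as the companion-to-neutron-star mass ratio, which is our assumption about the convention used, the quoted numbers imply a companion mass of roughly

q = \frac{M_2}{M_1} \approx 0.34 \quad\Rightarrow\quad M_2 \approx 0.34 \times 1.35\,M_\odot \approx 0.46\,M_\odot,

consistent with a low-mass main-sequence donor.)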
                                
Abstract:
Medium-range flood forecasting activities, driven by various meteorological forecasts ranging from high-resolution deterministic forecasts to low spatial resolution ensemble prediction systems, share a major challenge in the appropriateness and design of performance measures. In this paper, possible limitations of some traditional hydrological and meteorological prediction quality and verification measures are identified. Some simple modifications are applied in order to circumvent the problem of autocorrelation dominating river discharge time-series, and in order to create a benchmark model enabling decision makers to evaluate both forecast quality and model quality. Although the performance period is quite short, the advantage of a simple cost-loss function as a measure of forecast quality can be demonstrated.
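
(Illustration: in the simple static cost-loss model, a user with protection cost C and potential loss L acts whenever the forecast probability exceeds C/L, and forecast value is measured against climatology and a perfect forecast. The Python sketch below uses this textbook form; the paper's exact measure may differ.)

import numpy as np

def cost_loss_value(p_forecast, event, cost, loss, p_clim):
    # Relative economic value of probability forecasts in the static
    # cost-loss model: act whenever forecast probability exceeds cost/loss.
    act = p_forecast > cost / loss
    e_forecast = np.mean(np.where(act, cost, event * loss))  # mean expense with forecast
    e_climate = min(cost, p_clim * loss)   # best fixed (climatological) strategy
    e_perfect = p_clim * cost              # protect only on event occasions
    return (e_climate - e_forecast) / (e_climate - e_perfect)

# Usage with synthetic events and forecasts sharper than climatology.
rng = np.random.default_rng(1)
event = rng.random(1000) < 0.2
p_forecast = np.clip(0.6 * event + 0.4 * rng.random(1000), 0.0, 1.0)
print(cost_loss_value(p_forecast, event, cost=1.0, loss=5.0, p_clim=0.2))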
                                
Abstract:
We present a mathematical model describing the inward solidification of a slab, a circular cylinder and a sphere of binary melt kept below its equilibrium freezing temperature. The thermal and physical properties of the melt and solid are assumed to be identical. An asymptotic method, valid in the limit of large Stefan number, is used to decompose the moving boundary problem for a pure substance into a hierarchy of fixed-domain diffusion problems. Approximate, analytical solutions are derived for the inward solidification of a slab and a sphere of a binary melt, which are compared with numerical solutions of the unapproximated system. The solutions are found to agree within the appropriate asymptotic regime of large Stefan number and small time. Numerical solutions are used to demonstrate the dependence of the solidification process upon the level of impurity and other parameters. We conclude with a discussion of the solutions obtained, their stability and possible extensions and refinements of our study.
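
(Illustration: the large-Stefan-number idea in its simplest setting, the one-phase planar problem. With Stefan number \beta = L/(c\,\Delta T) \gg 1, the front moves slowly and the temperature field is quasi-steady at leading order, so the temperature is linear across the solid and, in nondimensional variables, the Stefan condition reduces to

\beta \frac{ds}{dt} = \frac{1}{s} \quad\Longrightarrow\quad s(t) = \sqrt{2t/\beta},

with fixed-domain diffusion corrections entering at higher orders in 1/\beta. This is a standard leading-order calculation, not the paper's full slab/cylinder/sphere analysis.)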
                                
Abstract:
Objective: Many diseases, including atherosclerosis, involve chronic inflammation. The master transcription factor for inflammation is NF-κB. Inflammatory sites have a low extracellular pH. Our objective was to demonstrate the effect of pH on NF-κB activation and cytokine secretion. Methods: Mouse J774 macrophages or human THP-1 or monocyte-derived macrophages were incubated at pH 7.0–7.4 and inflammatory cytokine secretion and NF-κB activity were measured. Results: A pH of 7.0 greatly decreased pro-inflammatory cytokine secretion (TNF or IL-6) by J774 macrophages, but not by THP-1 or human monocyte-derived macrophages. Upon stimulation of mouse macrophages, the levels of IκBα, which inhibits NF-κB, fell, but low pH prevented its later increase, which normally restores the baseline activity of NF-κB, even though the levels of mRNA for IκBα were increased. pH 7.0 greatly increased and prolonged NF-κB binding to its consensus promoter sequence, especially by the anti-inflammatory p50:p50 homodimers. Human p50 was overexpressed using adenovirus in THP-1 macrophages and monocyte-derived macrophages to see if it would confer pH sensitivity to NF-κB activity in human cells. Overexpression of p50 increased p50:p50 DNA binding and, in THP-1 macrophages, considerably inhibited TNF and IL-6 secretion, but there was still no effect of pH on p50:p50 DNA binding or cytokine secretion. Conclusion: A modest decrease in pH can sometimes have marked effects on NF-κB activation and cytokine secretion, and might be one reason why mice normally develop less atherosclerosis than humans do.
                                
Abstract:
Low self-esteem is a common, disabling, and distressing problem that has been shown to be involved in the etiology and maintenance of a range of Axis I disorders. Hence, it is a priority to develop effective treatments for low self-esteem. A cognitive-behavioral conceptualization of low self-esteem has been proposed and a cognitive-behavioral treatment (CBT) program described (Fennell, 1997, 1999). As yet there has been no systematic evaluation of this treatment with routine clinical populations. The current case report describes the assessment, formulation, and treatment of a patient with low self-esteem, depression, and anxiety symptoms. At the end of treatment (12 sessions over 6 months), and at 1-year follow-up, the treatment showed large effect sizes on measures of depression, anxiety, and self-esteem. The patient no longer met diagnostic criteria for any psychiatric disorder, and showed reliable and clinically significant change on all measures. As far as we are aware, there are no other published case studies of CBT for low self-esteem that report pre- and posttreatment evaluations, or follow-up data. Hence, this case provides an initial contribution to the evidence base for the efficacy of CBT for low self-esteem. However, further research is needed to confirm the efficacy of CBT for low self-esteem and to compare its efficacy and effectiveness to alternative treatments, including diagnosis-specific CBT protocols.
 
                    