174 results for SimPly
Abstract:
This review discusses liquid crystal phase formation by biopolymers in solution. Lyotropic mesophases have been observed for several classes of biopolymer including DNA, peptides, polymer/peptide conjugates, glycopolymers and proteoglycans. Nematic or chiral nematic (cholesteric) phases are the most commonly observed mesophases, in which the rod-like fibrils have only orientational order. Hexagonal columnar phases are observed for several systems (DNA, PBLG, polymer/peptide hybrids) at higher concentration. Lamellar (smectic) phases are reported less often, although there are examples such as the layer arrangement of amylopectin side chains in starch. Possible explanations for the observed structures are discussed. The biological role of liquid crystal phases for several of these systems is outlined. Commonly, they may serve as a template to align fibrils for defined structural roles when the biopolymer is extruded and dried, for instance in the production of silk by spiders or silkworms, or of chitin in arthropod shells. In other cases, liquid crystal phase formation may occur in vivo simply as a consequence of high concentration, for instance the high packing density of DNA within cell nuclei.
Abstract:
Foods derived from animals are an important source of nutrients in the diet, but there is considerable uncertainty about whether or not these foods contribute to increased risk of various chronic diseases. For milk in particular, there appears to be an enormous mismatch between, on the one hand, the advice given on milk/dairy food items by various authorities and public perceptions of harm from the consumption of milk and dairy products, and, on the other, the evidence from long-term prospective cohort studies. Such studies provide convincing evidence that increased consumption of milk can reduce the risk of vascular disease and possibly some cancers, and that it confers an overall survival advantage, although the relative effect of individual milk products is unclear. Accordingly, simply reducing milk consumption in order to reduce saturated fatty acid (SFA) intake is not likely to produce benefits overall, though the production of dairy products with reduced SFA contents is likely to be helpful. For red meat there is no evidence of increased risk of vascular diseases, though processed meat appears to increase the risk substantially. There is still conflicting and inconsistent evidence on the relationship between consumption of red meat and the development of colorectal cancer, but this topic should not be ignored. Likewise, the role of poultry meat and its products as sources of dietary fat and fatty acids is not fully clear. There is concern about the likely increase in the prevalence of dementia, but there are few data on the possible benefits or risks from milk and meat consumption. The future role of animal nutrition in creating foods closer to the optimum composition for long-term human health will be increasingly important. Overall, the case for increased milk consumption seems convincing, although the case for high-fat dairy products and red meat is not. Processed meat products do seem to have negative effects on long-term health and, although more research is required, these effects need to be put into the context of other risk factors for long-term health such as obesity, smoking and alcohol consumption.
Abstract:
Humanity requires healthy soil in order to flourish. Soil is central to food production, the regulation of greenhouse gases, recreational areas such as parks and sports fields, and the creation of an environment pleasing to the eye. But soil is fragile and easily damaged by uninformed management or accidents. One type of damage is contamination by the chemicals that underpin the lifestyles to which the developed world has become accustomed. Traditional soil "clean-up" has entailed either simple disposal or isolation of contaminated soil. Clearly this is not sustainable. Modern remedial techniques apply mineralogical and geochemical knowledge to clean up contaminated soil and make it fit for reuse, rather than simply discarding this precious and finite resource.
Abstract:
Increasingly, the microbiological scientific community is relying on molecular biology to define the complexity of the gut flora and to distinguish one organism from the next. This is particularly pertinent in the field of probiotics, and probiotic therapy, where identifying probiotics from the commensal flora is often warranted. Current techniques, including genetic fingerprinting, gene sequencing, oligonucleotide probes and specific primer selection, discriminate closely related bacteria with varying degrees of success. Additional molecular methods being employed to determine the constituents of complex microbiota in this area of research are community analysis, denaturing gradient gel electrophoresis (DGGE)/temperature gradient gel electrophoresis (TGGE), fluorescent in situ hybridisation (FISH) and probe grids. Certain approaches enable specific aetiological agents to be monitored, whereas others allow the effects of dietary intervention on bacterial populations to be studied. Other approaches demonstrate diversity, but may not always enable quantification of the population. At the heart of current molecular methods is sequence information gathered from culturable organisms. However, the diversity and novelty identified when applying these methods to the gut microflora demonstrate how little is known about this ecosystem. Of greater concern is the inherent bias associated with some molecular methods. As we understand more of the complexity and dynamics of this diverse microbiota we will be in a position to develop more robust molecular-based technologies to examine it. In addition to identification of the microbiota and discrimination of probiotic strains from commensal organisms, the future of molecular biology in the field of probiotics and the gut flora will, no doubt, stretch to investigations of functionality and activity of the microflora, and/or specific fractions. The quest will be to demonstrate the roles of probiotic strains in vivo and not simply their presence or absence.
Abstract:
Dense deployments of wireless local area networks (WLANs) are becoming the norm in many cities around the world. However, increased interference and traffic demands can severely limit the aggregate throughput achievable unless an effective channel assignment scheme is used. In this work, a simple and effective distributed channel assignment (DCA) scheme is proposed. It is shown that, in order to maximise throughput, each access point (AP) should simply choose the channel with the minimum number of active neighbour nodes (i.e. nodes associated with neighbouring APs that have packets to send). However, the practical application of such a scheme depends critically on its ability to estimate the number of neighbour nodes in each channel, for which no practical estimator has been proposed before. In view of this, an extended Kalman filter (EKF) estimator and an estimate of the number of nodes per AP are proposed. These not only provide fast and accurate estimates but can also exploit channel-switching information from neighbouring APs. Extensive packet-level simulation results show that the proposed minimum neighbour and EKF estimator (MINEK) scheme is highly scalable and can provide significant throughput improvement over other channel assignment schemes.
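As a minimal illustration of the channel-selection rule above, the sketch below picks the channel with the fewest estimated active neighbour nodes. The channel list and counts are hypothetical, and the paper's EKF estimation stage is not reproduced here.

```python
# Minimal sketch of the minimum-neighbour channel rule described in the
# abstract. The per-channel counts are hypothetical; in the MINEK scheme
# they would come from an estimator such as the proposed EKF.

def choose_channel(active_neighbours):
    """Return the index of the channel with the fewest active neighbour nodes."""
    return min(range(len(active_neighbours)), key=lambda ch: active_neighbours[ch])

estimated_counts = [7, 3, 5]             # hypothetical estimates for 3 channels
print(choose_channel(estimated_counts))  # -> 1 (the second channel)
```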
Abstract:
This paper employs a state-space system description to provide a pole placement scheme via state feedback. It is shown that when a recursive least squares estimation scheme is used, the feedback employed can be expressed simply in terms of the estimated system parameters. To complement the state feedback approach, a method employing both state feedback and linear output feedback is discussed. Both methods are then compared with the previous output polynomial type feedback schemes.
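To fix ideas, here is a small pole-placement sketch with state feedback u = -Kx; the system matrices and target poles are illustrative, not taken from the paper, and SciPy's `place_poles` stands in for whatever placement computation the authors derive from the estimated parameters.

```python
# Hedged sketch: pole placement via state feedback for an example system.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])    # illustrative state matrix
B = np.array([[0.0],
              [1.0]])           # illustrative input matrix

desired = np.array([-4.0, -5.0])              # target closed-loop poles
K = place_poles(A, B, desired).gain_matrix    # feedback gain for u = -K x

# The closed-loop matrix A - B K should have exactly the desired eigenvalues.
print(np.sort(np.linalg.eigvals(A - B @ K)))  # -> [-5. -4.]
```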
Abstract:
I was born human. But this was an accident of fate - a condition merely of time and place. I believe it's something we have the power to change. I will tell you why. In August 1998, a silicon chip was implanted in my arm, allowing a computer to monitor me as I moved through the halls and offices of the Department of Cybernetics at the University of Reading, just west of London, where I've been a professor since 1988. My implant communicated via radio waves with a network of antennas throughout the department that in turn transmitted the signals to a computer programmed to respond to my actions. At the main entrance, a voice box operated by the computer said "Hello" when I entered; the computer detected my progress through the building, opening the door to my lab for me as I approached it and switching on the lights. For the nine days the implant was in place, I performed seemingly magical acts simply by walking in a particular direction. The aim of this experiment was to determine whether information could be transmitted to and from an implant. Not only did we succeed, but the trial demonstrated how the principles behind cybernetics could perform in real-life applications.
Abstract:
This paper presents several new families of cumulant-based linear equations with respect to the inverse filter coefficients for deconvolution (equalisation) and identification of nonminimum-phase systems. Based on noncausal autoregressive (AR) modelling of the output signals and three theorems, these equations are derived for the cases of 2nd-, 3rd- and 4th-order cumulants, respectively, and can be expressed in identical or similar forms. The algorithms constructed from these equations are simpler in form, but can offer more accurate results than the existing methods. Since the inverse filter coefficients are simply the solution of a set of linear equations, their uniqueness can normally be guaranteed. Simulations are presented for the cases of skewed series, unskewed continuous series and unskewed discrete series. The results of these simulations confirm the feasibility and efficiency of the algorithms.
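Since the abstract's key computational point is that the inverse filter coefficients drop out of a set of linear equations, the following sketch shows that step in isolation: a random placeholder system C w = d stands in for the stacked cumulant-based equations, which are not reproduced here.

```python
# Hedged sketch: once the cumulant equations are stacked as C w = d, the
# inverse filter w follows from a least-squares solve, and is unique when
# C has full column rank (the uniqueness point made in the abstract).
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((40, 8))   # placeholder: 40 equations, 8 filter taps
w_true = rng.standard_normal(8)    # "true" inverse filter, for checking only
d = C @ w_true                     # placeholder right-hand side

w, *_ = np.linalg.lstsq(C, d, rcond=None)
print(np.allclose(w, w_true))      # -> True
```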
Abstract:
A novel technique for micro-machining millimeter and submillimeter-wave rectangular waveguide components is reported. These are fabricated in two halves which simply snap together, utilizing locating pins and holes, and are physically robust, cheap, and easy to manufacture. In addition, S-parameter measurements on these structures are reported for the first time and display lower loss than previously reported micro-machined rectangular waveguides.
Abstract:
In an adaptive equaliser, the time lag is an important parameter that significantly influences performance. Only with the optimum time lag, which corresponds to the best minimum mean-square-error (MMSE) performance, can the best use be made of the available resources. Many designs, however, choose the time lag based either on prior assumptions about the channel or simply on average experience. The relation between the MMSE performance and the time lag is investigated using a new interpretation of the MMSE equaliser, and a novel adaptive time-lag algorithm based on gradient search is then proposed. The proposed algorithm converges to the optimum time lag in the mean, as is verified by the numerical simulations provided.
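The algorithm itself adapts the lag by gradient search; as a simple point of reference, the sketch below computes the Wiener MMSE of a linear equaliser at every candidate delay by brute force and reads off the minimiser. The channel taps, equaliser length and noise level are illustrative.

```python
# Hedged sketch: brute-force MMSE versus decision delay for a linear
# equaliser (a reference computation, not the paper's gradient algorithm).
import numpy as np

def mmse_vs_delay(h, eq_len, noise_var):
    """Wiener MMSE of a length-eq_len equaliser for each candidate delay."""
    L = len(h)
    # Channel convolution matrix: the equaliser input vector is H @ s + noise.
    H = np.zeros((eq_len, eq_len + L - 1))
    for i in range(eq_len):
        H[i, i:i + L] = h
    R_inv = np.linalg.inv(H @ H.T + noise_var * np.eye(eq_len))
    # MMSE at delay d is 1 - p_d^T R^{-1} p_d, with p_d the d-th column of H
    # (unit-variance symbols assumed).
    return np.array([1.0 - H[:, d] @ R_inv @ H[:, d]
                     for d in range(eq_len + L - 1)])

h = np.array([0.5, 1.0, 0.4])   # illustrative channel taps
J = mmse_vs_delay(h, eq_len=11, noise_var=0.01)
print("optimum delay:", int(np.argmin(J)), "MMSE:", float(J.min()))
```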
Abstract:
The power of an adaptive equalizer is maximized when its structural parameters, including the tap-length and decision delay, are optimally chosen. Although methods for adjusting either the tap-length or the decision delay have been proposed, adjusting both simultaneously becomes much more involved as they interact with each other. In this paper, this problem is solved by placing a linear prewhitener before the equalizer, with which the equivalent channel becomes maximum-phase. This implies that the optimum decision delay can simply be fixed at the tap-length minus one, while the tap-length can then be chosen using an approach similar to that proposed in our previous work.
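As a quick numerical aside on the maximum-phase property this argument relies on: a channel is maximum-phase when all zeros of its transfer function lie outside the unit circle, and for such an equivalent channel the abstract fixes the decision delay at the tap-length minus one. The taps below are illustrative.

```python
# Hedged sketch: checking the maximum-phase property and the fixed delay.
import numpy as np

g = np.array([0.3, 0.5, 1.0])   # illustrative taps (a reversed minimum-phase filter)
print(np.abs(np.roots(g)))      # all magnitudes > 1, so g is maximum-phase

tap_length = 12                 # illustrative equaliser tap-length
optimum_delay = tap_length - 1  # the fixed choice argued for in the abstract
print(optimum_delay)            # -> 11
```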
Abstract:
Many natural and technological applications generate time-ordered sequences of networks, defined over a fixed set of nodes; for example, time-stamped information about ‘who phoned who’ or ‘who came into contact with who’ arises naturally in studies of communication and the spread of disease. Concepts and algorithms for static networks do not immediately carry through to this dynamic setting. For example, suppose A and B interact in the morning, and then B and C interact in the afternoon. Information, or disease, may then pass from A to C, but not vice versa. This subtlety is lost if we simply summarize using the daily aggregate network given by the chain A-B-C. However, using a natural definition of a walk on an evolving network, we show that classic centrality measures from the static setting can be extended in a computationally convenient manner. In particular, communicability indices can be computed to summarize the ability of each node to broadcast and receive information. The computations involve basic operations in linear algebra, and the asymmetry caused by time’s arrow is captured naturally through the non-commutativity of matrix-matrix multiplication. Illustrative examples are given for both synthetic and real-world communication data sets. We also discuss the use of the new centrality measures for real-time monitoring and prediction.
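One concrete construction in this spirit is a resolvent-product ‘dynamic communicability’ matrix; the sketch below encodes the morning/afternoon example from the abstract. The downweighting parameter a and the toy networks are illustrative, and the code is a sketch of the idea rather than the paper's exact formulation.

```python
# Hedged sketch: walk-based communicability over an ordered network sequence,
# built as Q = (I - a*A1)^{-1} (I - a*A2)^{-1} ...  Order matters because
# matrix products do not commute, which is how time's arrow enters.
import numpy as np

def dynamic_communicability(adj_seq, a=0.3):
    n = adj_seq[0].shape[0]
    Q = np.eye(n)
    for A in adj_seq:
        Q = Q @ np.linalg.inv(np.eye(n) - a * A)
    return Q

# Nodes: 0 = A, 1 = B, 2 = C. Morning: A-B interact; afternoon: B-C interact.
morning   = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
afternoon = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=float)

Q = dynamic_communicability([morning, afternoon])
print(Q[0, 2] > 0)         # True: a time-respecting walk runs from A to C
print(Q[2, 0] == 0)        # True: but no walk runs from C back to A
broadcast = Q.sum(axis=1)  # each node's ability to send information onwards
receive   = Q.sum(axis=0)  # each node's ability to receive it
```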
Abstract:
This paper investigates the extent to which clients were able to influence performance measurement appraisals during the downturn in commercial property markets that began in the UK during the second half of 2007. The sharp change in market sentiment produced speculation that different client categories were attempting to influence their appraisers in different ways. In particular, it was recognised that the requirement for open-ended funds to meet redemptions gave them strong incentives to ensure that their asset values were marked down to market. Using data supplied by Investment Property Databank, we demonstrate that, indeed, unlisted open-ended funds experienced sharper drops in capital values than other fund types in the second half of 2007, after the market turning point. These differences are statistically significant and cannot simply be explained by differences in portfolio composition. Client influence on appraisal forms one possible explanation of the results observed, with the different pressures on fund managers resulting in different appraisal outcomes.
Abstract:
We use ellipsometry to investigate a transition in the morphology of a sphere-forming diblock copolymer thin-film system. At an interface the diblock morphology may differ from the bulk when the interfacial tension favours wetting of the minority domain, thereby inducing a sphere-to-lamella transition. In a small, favourable window in energetics, one may observe this transition simply by adjusting the temperature. Ellipsometry is ideally suited to the study of the transition because the additional interface created by the wetting layer affects the polarisation of light reflected from the sample. Here we study thin films of poly(butadiene-ethylene oxide) (PB-PEO), which order to form PEO minority spheres in a PB matrix. As temperature is varied, the reversible transition from a partially wetting layer of PEO spheres to a full wetting layer at the substrate is investigated.
Abstract:
This article proposes a new model for autoregressive conditional heteroscedasticity and kurtosis. Via a time-varying degrees of freedom parameter, the conditional variance and conditional kurtosis are permitted to evolve separately. The model uses only the standard Student’s t-density and consequently can be estimated simply using maximum likelihood. The method is applied to a set of four daily financial asset return series comprising U.S. and U.K. stocks and bonds, and significant evidence in favor of the presence of autoregressive conditional kurtosis is observed. Various extensions to the basic model are proposed, and we show that the response of kurtosis to good and bad news is not significantly asymmetric.
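Since the model's tractability rests on the standardised Student's t-density, a minimal sketch of that log-density and of the mapping from the degrees-of-freedom parameter to kurtosis is given below; the parameterisation is generic, not the paper's exact specification.

```python
# Hedged sketch: the unit-variance Student's t log-density used in maximum
# likelihood estimation, and the nu -> kurtosis mapping that lets a
# time-varying nu_t deliver time-varying conditional kurtosis.
import numpy as np
from scipy.special import gammaln

def t_loglik(z, nu):
    """Log-density of the standardised (unit-variance) Student's t at z; nu > 2."""
    return (gammaln((nu + 1) / 2) - gammaln(nu / 2)
            - 0.5 * np.log(np.pi * (nu - 2))
            - (nu + 1) / 2 * np.log1p(z ** 2 / (nu - 2)))

def t_kurtosis(nu):
    """Kurtosis of the Student's t distribution; finite only for nu > 4."""
    return 3 * (nu - 2) / (nu - 4)

print(t_loglik(0.0, 8.0))       # log-density at the centre for nu = 8
for nu in (5.0, 8.0, 30.0):
    print(nu, t_kurtosis(nu))   # kurtosis falls towards 3 as nu grows
```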