737 results for Soft computing


Abstract:

Biomaterials are often soft materials. There is now growing interest in designing, synthesising and characterising soft materials that mimic the properties of biological materials such as tissue, proteins, DNA or cells. Research on biomimetic soft matter is therefore a developing theme with important emerging applications in biomedicine, including tissue engineering, diagnostics, gene therapy, drug delivery and many others. There are also important basic science questions concerning the use of concepts from colloid and polymer science to understand the self-assembly of biomimetic soft materials. This issue of Soft Matter presents a selection of extremely topical articles on a diversity of biomimetic soft matter systems. I thank the contributors for this quite remarkable collection of papers, which report many fascinating discoveries and insights.

Abstract:

In this paper we derive novel approximations to trapped waves in a two-dimensional acoustic waveguide whose walls vary slowly along the guide, and at which either Dirichlet (sound-soft) or Neumann (sound-hard) conditions are imposed. The guide contains a single smoothly bulging region of arbitrary amplitude, but is otherwise straight, and the modes are trapped within this localised increase in width. Using an approach similar to that of Rienstra (2003), a WKBJ-type expansion yields an approximate expression for the modes that can be present, which display either propagating or evanescent behaviour; matched asymptotic expansions are then used to derive connection formulae which bridge the gap across the cut-off between propagating and evanescent solutions in a tapering waveguide. A uniform expansion is then determined, and it is shown that appropriate zeros of this expansion correspond to trapped-mode wavenumbers; the trapped modes themselves are then approximated by the uniform expansion. Numerical results determined via a standard iterative method are compared to results of the full linear problem calculated using a spectral method, and the two are shown to be in excellent agreement, even when $\epsilon$, the parameter characterising the slow variation of the guide’s walls, is relatively large.
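
To give a flavour of the type of expansion involved, here is the generic leading-order WKBJ ansatz for a duct of slowly varying width $h(X)$, with slow variable $X = \epsilon x$ and sound-soft walls at $y = 0$ and $y = h(X)$; the notation is ours for illustration and is not necessarily that of the paper:

$$u(x, y) \approx \frac{A}{\sqrt{k_n(X)\,h(X)}}\,\sin\!\left(\frac{n\pi y}{h(X)}\right)\exp\!\left(\frac{\mathrm{i}}{\epsilon}\int^{X} k_n(s)\,\mathrm{d}s\right), \qquad k_n(X) = \sqrt{\omega^2 - \left(\frac{n\pi}{h(X)}\right)^2}.$$

A mode of this form propagates where $k_n^2(X) > 0$ and is evanescent where $k_n^2(X) < 0$; connection formulae of the kind described above are what bridge the turning points $k_n(X) = 0$ that separate the two regimes.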

Abstract:

Markowitz showed that assets can be combined to produce an 'Efficient' portfolio that will give the highest level of portfolio return for any level of portfolio risk, as measured by the variance or standard deviation. These portfolios can then be connected to generate what is termed an 'Efficient Frontier' (EF). In this paper we discuss the calculation of the Efficient Frontier for combinations of assets, again using the spreadsheet Optimiser. To illustrate the derivation of the Efficient Frontier, we use the data from the Investment Property Databank Long Term Index of Investment Returns for the period 1971 to 1993. Many investors might require a specific level of holding, or a restriction on holdings, in at least some of the assets. Such additional constraints may be readily incorporated into the model to generate a constrained EF with upper and/or lower bounds, which can then be compared with the unconstrained EF to see whether the reduction in return is acceptable. To see the effect that such additional constraints may have, we adopt a fairly typical pension fund profile, with no more than 20% of the total held in Property. The paper shows that it is now relatively easy to use the Optimiser available in at least one spreadsheet (Excel) to calculate efficient portfolios for various levels of risk and return, both constrained and unconstrained, and so to generate any number of Efficient Frontiers.
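
For readers without the spreadsheet to hand, the same constrained mean-variance optimisation is easy to sketch in a few lines of Python; the figures below are hypothetical placeholders, not the IPD data, and scipy's SLSQP solver plays the role of the spreadsheet Optimiser:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs for illustration only (NOT the IPD 1971-1993 series).
mu = np.array([0.10, 0.08, 0.12])            # expected returns: equities, gilts, property
cov = np.array([[0.040, 0.006, 0.004],       # covariance matrix of asset returns
                [0.006, 0.010, 0.002],
                [0.004, 0.002, 0.020]])

def frontier_point(target, bounds):
    """Minimise portfolio variance w'Cw subject to full investment
    and a required expected return: one point on the EF."""
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},
            {'type': 'eq', 'fun': lambda w: w @ mu - target})
    res = minimize(lambda w: w @ cov @ w, np.full(len(mu), 1 / len(mu)),
                   bounds=bounds, constraints=cons)
    return res.x, np.sqrt(res.fun)           # weights and portfolio risk (std dev)

# Trace both frontiers: unconstrained (long-only) and with a 20% cap on property.
for target in np.linspace(0.085, 0.100, 4):
    w_free, risk_free = frontier_point(target, [(0, 1)] * 3)
    w_cap, risk_cap = frontier_point(target, [(0, 1), (0, 1), (0, 0.20)])
    print(f"return {target:.3f}: risk {risk_free:.4f} (free) vs {risk_cap:.4f} (capped)")
```

Connecting the resulting (risk, return) pairs traces out the EF; the gap between the free and capped frontiers is precisely the cost of the 20% property restriction discussed above.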

Abstract:

The removal of the most long-lived radiotoxic elements in used nuclear fuel, the minor actinides, is foreseen as an essential step toward increasing the public acceptance of nuclear energy as a key component of a low-carbon energy future. Once removed from the remaining used fuel, these elements can be used as fuel in their own right in fast reactors or converted into shorter-lived or stable elements by transmutation prior to geological disposal. The SANEX process is proposed to carry out this selective separation by solvent extraction. Recent efforts to develop reagents capable of separating the radioactive minor actinides from lanthanides as part of a future strategy for the management and reprocessing of used nuclear fuel are reviewed. The current strategies for the reprocessing of PUREX raffinate are summarized, and some guiding principles for the design of actinide-selective reagents are defined. The development and testing of different classes of solvent extraction reagent are then summarized, covering some of the earliest ligand designs right through to the current reagents of choice, bis(1,2,4-triazine) ligands. Finally, we summarize research aimed at developing a fundamental understanding of the underlying reasons for the excellent extraction capabilities and high actinide/lanthanide selectivities shown by this class of ligands and our recent efforts to immobilize these reagents onto solid phases.
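
For reference, the figures of merit behind the "selectivities" quoted in this literature are standard (these definitions are generic to solvent extraction, not specific to this review): the distribution ratio $D$ of a metal M between the organic and aqueous phases, and the actinide/lanthanide separation factor, often reported as $SF_{\mathrm{Am/Eu}}$:

$$D_{\mathrm{M}} = \frac{[\mathrm{M}]_{\mathrm{org}}}{[\mathrm{M}]_{\mathrm{aq}}}, \qquad SF_{\mathrm{An/Ln}} = \frac{D_{\mathrm{An}}}{D_{\mathrm{Ln}}}.$$

Roughly speaking, a viable SANEX-type extractant needs $D_{\mathrm{An}} > 1$ together with $SF_{\mathrm{An/Ln}} \gg 1$.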

Abstract:

The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, neither the scientific workforce nor the computational capability required to bring about such a revolution is available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change occur mainly through changes in the statistics of regional weather variations, the scientific and computational requirements for reliable prediction are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenge of developing the capability to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders.

We recommend the creation of a small number of multinational facilities, each with computer capability of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have a scientific workforce sufficient to develop and maintain the software and data-analysis infrastructure. Such facilities will make it possible to establish what horizontal and vertical resolution is necessary in atmospheric and ocean models for more confident predictions at the regional and local level; current limits on computing power have severely constrained such investigation, which is now badly needed. These facilities will also provide the world's scientists with computational laboratories for fundamental research on weather–climate interactions using 1-km resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure, including hardware, software, and data-analysis support, and the scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it, and will ultimately enable the climate community to provide society with climate predictions based on our best scientific knowledge and the most advanced technology.

Abstract:

Pocket Data Mining (PDM) is our new term describing collaborative mining of streaming data in mobile and distributed computing environments. With vast numbers of data streams now available for subscription on our smart mobile phones, using these data for decision making with data stream mining techniques has become achievable owing to the increasing power of these handheld devices. Wireless communication among these devices using Bluetooth and WiFi technologies has opened the door to collaborative mining among mobile devices that are within range of one another and are running data mining techniques targeting the same application. This paper proposes a new architecture, which we have prototyped, for realizing significant applications in this area. We propose using mobile software agents in this application for several reasons; most importantly, the autonomous, intelligent behaviour of agent technology has been the driving force for adopting it. Other efficiency reasons are discussed in detail in this paper. Experimental results showing the feasibility of the proposed architecture are presented and discussed.
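
The mobile-agent idea is easy to sketch in code. The following is a hypothetical illustration only (the class and method names are ours, not the paper's architecture or API): an agent hops between nearby devices, mines each local stream in place, and carries partial results with it.

```python
import random
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Device:
    name: str
    stream: Callable[[], float]              # stands in for a live data feed

@dataclass
class MiningAgent:
    task: Callable[[List[float]], float]     # e.g. a running aggregate or model update
    results: Dict[str, float] = field(default_factory=dict)

    def visit(self, device: Device, n_samples: int = 100) -> None:
        # Mine locally rather than shipping raw stream data over the network.
        samples = [device.stream() for _ in range(n_samples)]
        self.results[device.name] = self.task(samples)

    def merge(self) -> float:
        # Combine per-device partial results into a collaborative answer.
        return sum(self.results.values()) / len(self.results)

devices = [Device(f"phone-{i}", lambda: random.gauss(20.0, 2.0)) for i in range(3)]
agent = MiningAgent(task=lambda xs: sum(xs) / len(xs))
for d in devices:                            # the Bluetooth/WiFi hop is abstracted away
    agent.visit(d)
print(agent.merge())                         # collaborative estimate, no raw data centralised
```

The design choice the paper emphasises is visible even in this toy: the agent moves to the data and returns only small summaries, which suits bandwidth- and battery-constrained handheld devices.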

Abstract:

The P-found protein folding and unfolding simulation repository is designed to allow scientists to perform analyses across large, distributed simulation data sets. There are two storage components in P-found: a primary repository of simulation data and a data warehouse. Here we demonstrate how grid technologies can support multiple, distributed P-found installations. In particular we look at two aspects: first, how grid data management technologies can be used to access the distributed data warehouses; and secondly, how the grid can be used to transfer analysis programs to the primary repositories, an important and challenging aspect of P-found because the data volumes involved are too large to be centralised. The grid technologies we are developing with the P-found system will allow new large data sets of protein folding simulations to be accessed and analysed in novel ways, with significant potential for enabling new scientific discoveries.
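
The second aspect is an instance of the general "move the analysis to the data" pattern, sketched minimally below; the class, function and site names are ours for illustration and are not the P-found API. Each primary repository exposes an entry point that runs a shipped analysis program locally, so only small summary results ever cross the network.

```python
from typing import Callable, Dict, List

class PrimaryRepository:
    def __init__(self, site: str, trajectories: List[List[float]]):
        self.site = site
        self.trajectories = trajectories     # stands in for locally stored simulation data

    def run_analysis(self, analysis: Callable[[List[float]], float]) -> Dict[str, float]:
        # The analysis program travels to the data; the bulky data stays put.
        return {f"{self.site}/traj{i}": analysis(t)
                for i, t in enumerate(self.trajectories)}

def toy_metric(traj: List[float]) -> float:  # stand-in for a real folding analysis
    return sum(traj) / len(traj)

sites = [PrimaryRepository("site-a", [[1.0, 2.0], [2.0, 4.0]]),
         PrimaryRepository("site-b", [[3.0, 3.0]])]
combined: Dict[str, float] = {}
for s in sites:                              # grid middleware would handle this dispatch
    combined.update(s.run_analysis(toy_metric))
print(combined)
```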