929 results for scientific computation
Abstract:
Marine reserves, often referred to as no-take MPAs, are defined as areas within which human activities that can result in the removal or alteration of biotic and abiotic components of an ecosystem are prohibited or greatly restricted (NRC 2001). Activities typically curtailed within a marine reserve are extraction of organisms (e.g., commercial and recreational fishing, kelp harvesting, commercial collecting), mariculture, and those activities that can alter oceanographic or geologic attributes of the habitat (e.g., mining, shore-based industrial-related intake and discharges of seawater and effluent). Usually, marine reserves are established to conserve biodiversity or enhance nearby fishery resources. Thus, goals and objectives of marine reserves can be inferred, even if they are not specifically articulated at the time of reserve formation. In this report, we review information about the effectiveness of the three marine reserves in the Monterey Bay National Marine Sanctuary (Hopkins Marine Life Refuge, Point Lobos Ecological Reserve, Big Creek Ecological Reserve), and the one in the Channel Islands National Marine Sanctuary (the natural area on the north side of East Anacapa Island). Our efforts to objectively evaluate reserves in Central California relative to reserve theory were greatly hampered for four primary reasons: (1) few of the existing marine reserves were created with clearly articulated goals or objectives, (2) relatively few studies of the ecological consequences of existing reserves have been conducted, (3) no studies to date encompass the spatial and temporal scope needed to identify ecosystem-wide effects of reserve protection, and (4) there are almost no studies that describe the social and economic consequences of existing reserves. To overcome these obstacles, we used several methods to evaluate the effectiveness of subtidal marine reserves in Central California.
We first conducted a literature review to find out what research has been conducted in all marine reserves in Central California (Appendix 1). We then reviewed the scientific literature that relates to marine reserve theory to help define criteria to use as benchmarks for evaluation. A recent National Research Council (2001) report summarized expected reserve benefits and provided the criteria we used for evaluation of effectiveness. The next step was to identify the research projects in this region that collected information in a way that enabled us to evaluate reserve theory relative to marine reserves in Central California. Chapters 1-4 in this report provide summaries of those research projects. Contained within these chapters are evaluations of reserve effectiveness for meeting specific objectives. As few studies exist that pertain to reserve theory in Central California, we reviewed studies of marine reserves in other temperate and tropical ecosystems to determine if there were lessons to be learned from other parts of the world (Chapter 5). We also included a discussion of social and economic considerations germane to the public policy decision-making processes associated with marine reserves (Chapter 6). After reviewing all of these resources, we provided a summary of the ecological benefits that could be expected from existing reserves in Central California. The summary is presented in Part II of this report. (PDF contains 133 pages.)
Abstract:
This study was undertaken by UKOLN on behalf of the Joint Information Systems Committee (JISC) in the period April to September 2008. Application profiles are metadata schemata which consist of data elements drawn from one or more namespaces, optimized for a particular local application. They offer a way for particular communities to base the interoperability specifications they create and use for their digital material on established open standards. This offers the potential for digital materials to be accessed, used and curated effectively both within and beyond the communities in which they were created. The JISC recognized the need to undertake a scoping study to investigate metadata application profile requirements for scientific data in relation to digital repositories, and specifically concerning descriptive metadata to support resource discovery and other functions such as preservation. This followed on from the development of the Scholarly Works Application Profile (SWAP) undertaken within the JISC Digital Repositories Programme and led by Andy Powell (Eduserv Foundation) and Julie Allinson (RRT UKOLN) on behalf of the JISC.
Aims and Objectives
1. To assess whether a single metadata AP for research data, or a small number thereof, would improve resource discovery or discovery-to-delivery in any useful or significant way.
2. If so, then to:
   a. assess whether the development of such AP(s) is practical and, if so, how much effort it would take;
   b. scope a community uptake strategy that is likely to be successful, identifying the main barriers and key stakeholders.
3. Otherwise, to investigate how best to improve cross-discipline, cross-community discovery-to-delivery for research data, and make recommendations to the JISC and others as appropriate.
Approach
The Study used a broad conception of what constitutes scientific data, namely data gathered, collated, structured and analysed using a recognizably scientific method, with a bias towards quantitative methods.
The approach taken was to map out the landscape of existing data centres, repositories and associated projects, and conduct a survey of the discovery-to-delivery metadata they use or have defined, alongside any insights they have gained from working with this metadata. This was followed up by a series of unstructured interviews, discussing use cases for a Scientific Data Application Profile, and how widely a single profile might be applied. On the latter point, matters of granularity, the experimental/measurement contrast, the quantitative/qualitative contrast, the raw/derived data contrast, and the homogeneous/heterogeneous data collection contrast were discussed. The Study report was loosely structured according to the Singapore Framework for Dublin Core Application Profiles, and in turn considered: the possible use cases for a Scientific Data Application Profile; existing domain models that could either be used or adapted for use within such a profile; and a comparison of existing metadata profiles and standards to identify candidate elements for inclusion in the description set profile for scientific data. The report also considered how the application profile might be implemented, its relationship to other application profiles, the alternatives to constructing a Scientific Data Application Profile, the development effort required, and what could be done to encourage uptake in the community. The conclusions of the Study were validated through a reference group of stakeholders.
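As a minimal sketch of the kind of description set such a profile defines, a record for a scientific dataset might mix elements drawn from the Dublin Core namespaces (dc, dcterms), with the profile fixing which elements are mandatory. The specific element choices and the validate helper below are illustrative assumptions, not the Study's recommendations:

```python
# Hypothetical record for a scientific dataset, mixing elements drawn
# from two namespaces (dc / dcterms) as an application profile would.
# The element selection is illustrative only.
record = {
    "dc:title": "Sea-surface temperature grid, North Atlantic, 2007",
    "dc:creator": "Example Oceanographic Data Centre",
    "dcterms:created": "2008-03-15",
    "dcterms:spatial": "North Atlantic",
    "dcterms:type": "Dataset",
    "dcterms:accessRights": "open",
}

def validate(rec, mandatory=("dc:title", "dc:creator", "dcterms:type")):
    """Return the mandatory elements of the (hypothetical) profile that are missing."""
    return [e for e in mandatory if e not in rec]

missing = validate(record)
print(missing)  # [] : all mandatory elements are present
```

A real description set profile would additionally constrain value vocabularies and cardinality, but the core idea, reusing elements from established namespaces under local constraints, is the same.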
Abstract:
Presentation slides as part of the Janet network end-to-end performance initiative
Abstract:
The report provides recommendations to policy makers in science and scholarly research regarding IPR policy to increase the impact of research and make the outcomes more available. The report argues that the impact of publicly-funded research outputs can be increased through a fairer balance between private and public interest in copyright legislation. This will allow for wider access to and easier re-use of published research reports. The common practice of authors being required to assign all rights to a publisher restricts the impact of research outputs and should be replaced by wider use of a non-exclusive licence. Full access and re-use rights to research data should be encouraged through use of a research-friendly licence.
Abstract:
Presentation to elected officials [and American Fisheries Society] on the wealth of research to be done in the Chesapeake Bay. Cites the drop in oyster production from a high of 17,000,000 bushels in 1885 to 2,000,000 bushels in 1925, or about one-eighth of its one-time abundance, and water studies through the late 1880s-90s. Reports experiments with the Japanese oyster O. gigas. Also addresses the crab, Callinectes sapidus, and classes held. (PDF contains 7 pages)
Abstract:
Along with the vast progress in experimental quantum technologies there is an increasing demand for the quantification of entanglement between three or more quantum systems. Theory still does not provide adequate tools for this purpose. The objective is, besides the quest for exact results, to develop operational methods that allow for efficient entanglement quantification. Here we put forward an analytical approach that serves both these goals. We provide a simple procedure to quantify Greenberger-Horne-Zeilinger-type multipartite entanglement in arbitrary three-qubit states. For two qubits this method is equivalent to Wootters' seminal result for the concurrence. It establishes a close link between entanglement quantification and entanglement detection by witnesses, and can be generalised both to higher dimensions and to more than three parties.
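Wootters' result for two qubits, which the abstract cites as the special case of its method, can be computed directly: the concurrence is C = max(0, λ1 − λ2 − λ3 − λ4), where the λi are the decreasingly ordered square roots of the eigenvalues of ρρ̃ and ρ̃ = (σy⊗σy)ρ*(σy⊗σy) is the spin-flipped state. A minimal numerical sketch (the test states are illustrative):

```python
import numpy as np

def concurrence(rho):
    """Wootters' concurrence of a two-qubit density matrix rho."""
    sy = np.array([[0, -1j], [1j, 0]])
    Y = np.kron(sy, sy)
    rho_tilde = Y @ rho.conj() @ Y                 # spin-flipped state
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde)))
    lam = np.sort(lam)[::-1]                       # descending order
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

bell = np.zeros(4); bell[[0, 3]] = 1 / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
rho_bell = np.outer(bell, bell.conj())
prod = np.zeros(4); prod[0] = 1.0                  # |00>, a product state
rho_prod = np.outer(prod, prod.conj())

print(concurrence(rho_bell))  # ~1.0 (maximally entangled)
print(concurrence(rho_prod))  # ~0.0 (separable)
```

The three-qubit generalisation described in the abstract goes beyond this two-qubit formula; the sketch only illustrates the quantity the method reduces to in the bipartite case.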
Abstract:
A three-dimensional MHD solver is described in the paper. The solver simulates reacting flows with nonequilibrium between the translational-rotational, vibrational and electron translational energy modes. The conservation equations are discretized with implicit time marching and the second-order modified Steger-Warming scheme, and the resulting linear system is solved iteratively with a Newton-Krylov-Schwarz method implemented with the PETSc package. Convergence tests show good scalability and convergence roughly twice as fast as the DPLR method. Five test runs are then conducted, simulating the experiments done at the NASA Ames MHD channel, and the calculated pressures, temperatures, electrical conductivity, back EMF, load factors and flow accelerations agree with the experimental data. Our computation shows that the electrical conductivity distribution is not uniform in the powered section of the MHD channel, and that it is important to include Joule heating in order to calculate the correct conductivity and the MHD acceleration.
Abstract:
The brain is perhaps the most complex system to have ever been subjected to rigorous scientific investigation. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine, to an entire cortical area. Slowly, we are beginning to acquire experimental tools that can gather the massive amounts of data needed to characterize this system. However, to understand and interpret these data will also require substantial strides in inferential and statistical techniques. This dissertation attempts to meet this need, extending and applying the modern tools of latent variable modeling to problems in neural data analysis.
It is divided into two parts. The first begins with an exposition of the general techniques of latent variable modeling. A new, extremely general, optimization algorithm is proposed - called Relaxation Expectation Maximization (REM) - that may be used to learn the optimal parameter values of arbitrary latent variable models. This algorithm appears to alleviate the common problem of convergence to local, sub-optimal, likelihood maxima. REM leads to a natural framework for model size selection; in combination with standard model selection techniques the quality of fits may be further improved, while the appropriate model size is automatically and efficiently determined. Next, a new latent variable model, the mixture of sparse hidden Markov models, is introduced, and approximate inference and learning algorithms are derived for it. This model is applied in the second part of the thesis.
The second part brings the technology of Part I to bear on two important problems in experimental neuroscience. The first is known as spike sorting; this is the problem of separating the spikes from different neurons embedded within an extracellular recording. The dissertation offers the first thorough statistical analysis of this problem, which then yields the first powerful probabilistic solution. The second problem addressed is that of characterizing the distribution of spike trains recorded from the same neuron under identical experimental conditions. A latent variable model is proposed. Inference and learning in this model lead to new principled algorithms for smoothing and clustering of spike data.
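As a minimal illustration of the latent-variable machinery involved (not the dissertation's REM algorithm or its sparse-HMM mixture), expectation-maximization for a two-component 1-D Gaussian mixture, the simplest probabilistic clustering model of the kind used in spike sorting, can be written in a few lines; the synthetic data below are an assumption for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D "spike features": two neurons with different amplitudes.
data = np.concatenate([rng.normal(-2.0, 0.5, 300),
                       rng.normal( 2.0, 0.5, 300)])

# EM for a two-component Gaussian mixture (illustrative sketch only).
mu, sigma, pi = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(100):
    # E-step: posterior responsibility of each component for each point
    # (the shared Gaussian normalizing constant cancels in the ratio).
    dens = pi * np.exp(-0.5 * ((data[:, None] - mu) / sigma) ** 2) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the soft assignments.
    nk = resp.sum(axis=0)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(data)

print(np.sort(mu))  # close to [-2, 2]
```

EM of this kind is prone to the local-maxima problem the first part of the thesis targets; with well-separated clusters and an asymmetric initialization, as here, plain EM suffices.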