913 results for LEVEL SET METHODS
Abstract:
Novel imaging techniques are playing an increasingly important role in drug development, providing insight into the mechanism of action of new chemical entities. The data sets obtained by these methods can be large with complex inter-relationships, but the most appropriate statistical analysis for handling these data is often uncertain, precisely because of the exploratory nature of the way the data are collected. We present an example from a clinical trial using magnetic resonance imaging to assess changes in atherosclerotic plaques following treatment with a tool compound with established clinical benefit. We compared two specific approaches to handling the correlations due to physical location and repeated measurements: two-level and four-level multilevel models. The two methods identified similar structural variables, but the higher-level multilevel models had the advantage of explaining a greater proportion of the variation, and the modeling assumptions appeared to be better satisfied.
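As a hedged illustration of the two model structures compared above (the notation and the assumed hierarchy of slices within arteries within patients are ours, not taken from the trial), a schematic contrast is:

```latex
% Schematic only: y is a plaque measurement and x its covariates; the grouping
% indices j (slice), k (artery) and l (patient) reflect an assumed hierarchy.
\begin{align}
\text{Two-level:}  \quad y_{il}   &= \beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x}_{il}   + u_l + \varepsilon_{il},\\
\text{Four-level:} \quad y_{ijkl} &= \beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x}_{ijkl} + u_l + v_{kl} + w_{jkl} + \varepsilon_{ijkl},
\end{align}
% where u_l, v_{kl} and w_{jkl} are independent Gaussian random effects for
% patient, artery-within-patient and slice-within-artery, respectively.
```

The extra random effects in the four-level form are what allow it to absorb more of the variation, consistent with the finding reported above.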
Abstract:
Elephant poaching and the ivory trade remain high on the agenda at meetings of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). Well-informed debates require robust estimates of trends, the spatial distribution of poaching, and drivers of poaching. We present an analysis of trends and drivers of an indicator of elephant poaching for all elephant species. The site-based monitoring system known as Monitoring the Illegal Killing of Elephants (MIKE), set up by the 10th Conference of the Parties of CITES in 1997, produces carcass encounter data reported mainly by anti-poaching patrols. The data analyzed were site-by-year totals of 6,337 carcasses from 66 sites in Africa and Asia from 2002 to 2009. Analysis of these observational data poses a serious challenge to traditional statistical methods because of the opportunistic and non-random nature of patrols and the heterogeneity across sites. Adopting a Bayesian hierarchical modeling approach, we used the proportion of carcasses that were illegally killed (PIKE) as a poaching index to estimate the trend and the effects of site- and country-level factors associated with poaching. Important drivers of illegal killing that emerged at country level were poor governance and low levels of human development, and at site level, forest cover and area of the site in regions where human population density is low. After a drop from 2002, PIKE remained fairly constant from 2003 until 2006, after which it increased until 2008. The results for 2009 indicate a decline. Sites with PIKE ranging from the lowest to the highest were identified. The results of the analysis provide a sound information base for scientific evidence-based decision making in the CITES process.
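For context on the index named above, PIKE at a given site and year is the proportion of encountered carcasses judged to have been illegally killed; a Bayesian hierarchical model can then treat such site-by-year proportions as binomial outcomes with site- and country-level effects. A minimal sketch of the raw index (the counts are invented, not MIKE data):

```python
# Minimal sketch of the PIKE index: the proportion of carcasses encountered by
# patrols that were judged to be illegally killed. The counts are illustrative.
def pike(illegal_carcasses: int, total_carcasses: int) -> float:
    if total_carcasses == 0:
        raise ValueError("PIKE is undefined when no carcasses were encountered")
    return illegal_carcasses / total_carcasses

# e.g. a site-year with 40 carcasses found, 18 of them judged illegally killed:
print(f"PIKE = {pike(18, 40):.2f}")  # -> PIKE = 0.45
```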
Abstract:
There is intense scientific and public interest in the Intergovernmental Panel on Climate Change (IPCC) projections of sea level for the twenty-first century and beyond. The Fourth Assessment Report (AR4) projections, obtained by applying standard methods to the results of the World Climate Research Programme Coupled Model Experiment, include estimates of ocean thermal expansion, the melting of glaciers and ice caps (G&ICs), increased melting of the Greenland Ice Sheet, and increased precipitation over Greenland and Antarctica, partially offsetting other contributions. The AR4 recognized the potential for a rapid dynamic ice sheet response, but robust methods for quantifying it were not available. Illustrative scenarios suggested additional sea level rise on the order of 10 to 20 cm or more, giving a wide range in the globally averaged projections of about 20 to 80 cm by 2100. Currently, sea level is rising at a rate near the upper end of these projections. Since publication of the AR4 in 2007, biases in historical ocean temperature observations have been identified and significantly reduced, resulting in improved estimates of ocean thermal expansion. Models that include all climate forcings are in good agreement with these improved observations and indicate the importance of stratospheric aerosol loadings from volcanic eruptions. Estimates of the volumes of G&ICs and their contributions to sea level rise have improved. Results from recent (but possibly incomplete) efforts to develop improved ice sheet models should be available for the 2013 IPCC projections. Improved understanding of sea level rise is paving the way for using observations to constrain projections. Understanding of the regional variations in sea level change as a result of changes in ocean properties, wind-stress patterns, and heat and freshwater inputs into the ocean is improving. Recently, estimates of sea level changes resulting from changes in Earth's gravitational field and the solid Earth response to changes in surface loading have been included in regional projections. While potentially valuable, semi-empirical models have important limitations, and their projections should be treated with caution.
Abstract:
DNA G-quadruplexes are among the targets being actively explored for anti-cancer therapy through inhibition by small molecules. This computational study was conducted to predict the binding strengths and orientations of a set of novel dimethyl-amino-ethyl-acridine (DACA) analogues that were designed and synthesized in our laboratory but did not diffract in synchrotron light. The crystal structure of the DNA G-quadruplex (TGGGGT)₄ (PDB: 1O0K) was used as the target for their binding properties in our studies. We used both force field (FF) and QM/MM derived atomic charge schemes simultaneously to compare the predicted drug binding modes and their energetics. This study evaluates the comparative performance of fixed point charge based Glide XP docking and the quantum polarized ligand docking schemes. These results will provide insights into the effects of including or ignoring drug-receptor interfacial polarization events in molecular docking simulations, which, in turn, will aid the rational selection of computational methods at different levels of theory in future drug design programs. Plenty of molecular modelling tools and methods currently exist for modelling drug-receptor, protein-protein, or DNA-protein interactions at different levels of complexity. Yet the capacity of such tools to describe various physico-chemical properties more accurately is the next step ahead in current research. In particular, the use of the most accurate quantum mechanics (QM) methods is severely restricted by their tedious nature. Although the use of massively parallel supercomputing environments has resulted in a tremendous improvement in molecular mechanics (MM) calculations such as molecular dynamics, QM methods are still capable of dealing with only a couple of tens to hundreds of atoms. One efficient strategy that utilizes the powers of both MM and QM is the QM/MM hybrid approach. Lately, attempts have been directed towards the goal of deploying several different QM methods to improve force field based simulations, but with practical restrictions in place. One such method involves the inclusion of charge polarization events at the drug-receptor interface, which are not explicitly present in the MM FF.
Abstract:
We consider two weakly coupled systems and adopt a perturbative approach based on the Ruelle response theory to study their interaction. We propose a systematic way of parameterizing the effect of the coupling as a function of only the variables of a system of interest. Our focus is on describing the impacts of the coupling on the long term statistics rather than on the finite-time behavior. By direct calculation, we find that, at first order, the coupling can be surrogated by adding a deterministic perturbation to the autonomous dynamics of the system of interest. At second order, there are additionally two separate and very different contributions. One is a term taking into account the second-order contributions of the fluctuations in the coupling, which can be parameterized as a stochastic forcing with given spectral properties. The other one is a memory term, coupling the system of interest to its previous history, through the correlations of the second system. If these correlations are known, this effect can be implemented as a perturbation with memory on the single system. In order to treat this case, we present an extension to Ruelle's response theory able to deal with integral operators. We discuss our results in the context of other methods previously proposed for disentangling the dynamics of two coupled systems. We emphasize that our results do not rely on assuming a time scale separation, and, if such a separation exists, can be used equally well to study the statistics of the slow variables and that of the fast variables. By recursively applying the technique proposed here, we can treat the general case of multi-level systems.
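Restating the decomposition described above in symbols (the notation D, σ, ξ and h is ours, chosen purely for illustration and not taken from the paper), the surrogate dynamics of the system of interest x take the schematic form:

```latex
% Schematic surrogate dynamics for the system of interest; D, sigma, xi and h
% are illustrative symbols, not the paper's notation.
\begin{equation}
\dot{x} \;=\; F_X(x)
  \;+\; \underbrace{D(x)}_{\text{1st order: deterministic correction}}
  \;+\; \underbrace{\sigma(x)\,\xi(t)}_{\text{2nd order: stochastic forcing}}
  \;+\; \underbrace{\int_{0}^{\infty} h\!\left(\tau,\, x(t-\tau)\right)\mathrm{d}\tau}_{\text{2nd order: memory term}},
\end{equation}
% where xi(t) is a noise process whose spectral properties match the
% fluctuations of the coupling, and h encodes the lagged correlations of the
% second system that give rise to the memory effect.
```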
Abstract:
What constitutes a baseline level of success for protein fold recognition methods? As fold recognition benchmarks are often presented without any thought to the results that might be expected from a purely random set of predictions, an analysis of fold recognition baselines is long overdue. Given varying amounts of basic information about a protein—ranging from the length of the sequence to a knowledge of its secondary structure—to what extent can the fold be determined by intelligent guesswork? Can simple methods that make use of secondary structure information assign folds more accurately than purely random methods and could these methods be used to construct viable hierarchical classifications?
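As one way to make the notion of a purely random baseline concrete (the fold names and counts below are invented for the example), if both the true fold and the guess are drawn in proportion to the fold frequencies in a library, the expected accuracy is the sum of the squared class frequencies:

```python
# Hypothetical sketch of a frequency-weighted random baseline for fold
# recognition. Fold names and counts are invented; a real fold library would
# supply the actual class frequencies.
fold_counts = {
    "TIM barrel": 120,
    "Rossmann fold": 90,
    "Ig-like beta sandwich": 150,
    "all other folds": 640,
}

total = sum(fold_counts.values())
freqs = [n / total for n in fold_counts.values()]

# If the guess follows the same distribution as the true folds,
# P(correct) = sum_i f_i^2, which exceeds 1/n_folds when classes are unbalanced.
baseline_accuracy = sum(f * f for f in freqs)
print(f"expected accuracy of frequency-weighted random guessing: {baseline_accuracy:.3f}")
```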
Abstract:
A traditional plate count method and real-time PCR systems based on SYBR Green I and TaqMan technologies, using a specific primer pair and probe for amplification of the iap-gene, were used for quantitative assay of Listeria monocytogenes in seven decimal serial dilution series of nutrient broth and milk samples containing 1.58 to 1.58×10⁷ cfu/ml, and the real-time PCR methods were compared with the plate count method with respect to accuracy and sensitivity. In this study, the plate count method was performed by surface-plating 0.1 ml of each sample on Palcam Agar. The lowest detectable level for this method was 1.58×10¹ cfu/ml for both nutrient broth and milk samples. Using purified DNA as a template for generation of standard curves, as few as four copies of the iap-gene could be detected per reaction with both real-time PCR assays, indicating that they were highly sensitive. When these real-time PCR assays were applied to quantification of L. monocytogenes in decimal serial dilution series of nutrient broth and milk samples, 3.16×10¹ to 3.16×10⁵ copies per reaction (equal to 1.58×10³ to 1.58×10⁷ cfu/ml L. monocytogenes) were detectable. Expressed as logarithmic cycles, the quantitative results of the detectable steps for the plate count and both molecular assays were similar to the inoculation levels.
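Behind quantification against a purified-DNA standard curve, the copy number in a reaction is recovered from its Ct value through a linear fit of Ct against log10(copies). A minimal sketch (the slope and intercept are assumed, illustrative values, not those of the assays above):

```python
# Hypothetical sketch of absolute qPCR quantification from a standard curve:
# Ct = slope * log10(copies) + intercept. A slope of about -3.32 corresponds
# to ~100% amplification efficiency; both parameters here are assumed values.
slope = -3.32
intercept = 38.0

def copies_from_ct(ct: float) -> float:
    """Invert the fitted standard curve to estimate copies per reaction."""
    return 10 ** ((ct - intercept) / slope)

for ct in (35.0, 28.0, 21.0):
    print(f"Ct {ct:4.1f} -> ~{copies_from_ct(ct):,.0f} copies/reaction")
```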
Abstract:
In this study, we compare two different cyclone-tracking algorithms to detect North Atlantic polar lows, which are very intense mesoscale cyclones. Both approaches include spatial filtering, detection, tracking and constraints specific to polar lows. The first method uses digital bandpass-filtered mean sea level pressure (MSLP) fields in the spatial range of 200–600 km and is especially designed for polar lows. The second method also uses a bandpass filter but is based on the discrete cosine transforms (DCT) and can be applied to MSLP and vorticity fields. The latter was originally designed for cyclones in general and has been adapted to polar lows for this study. Both algorithms are applied to the same regional climate model output fields from October 1993 to September 1995 produced from dynamical downscaling of the NCEP/NCAR reanalysis data. Comparisons between these two methods show that different filters lead to different numbers and locations of tracks. The DCT is more precise in scale separation than the digital filter and the results of this study suggest that it is more suited for the bandpass filtering of MSLP fields. The detection and tracking parts also influence the numbers of tracks although less critically. After a selection process that applies criteria to identify tracks of potential polar lows, differences between both methods are still visible though the major systems are identified in both.
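To make the DCT-based scale separation concrete, a minimal sketch (not the authors' code; the 200–600 km band, grid spacing and helper names are assumptions) is to transform the MSLP field, zero all spectral coefficients whose wavelength falls outside the band, and transform back:

```python
# Sketch of DCT bandpass filtering of a 2-D MSLP field: keep only spectral
# components with wavelengths in an assumed 200-600 km band. Illustrative only.
import numpy as np
from scipy.fft import dctn, idctn

def dct_bandpass(field: np.ndarray, dx_km: float,
                 lo_km: float = 200.0, hi_km: float = 600.0) -> np.ndarray:
    ny, nx = field.shape
    coeffs = dctn(field, norm="ortho")
    # DCT mode k along an axis of N points spaced dx corresponds to a spatial
    # frequency of k / (2 * N * dx) cycles per km.
    fy = np.arange(ny)[:, None] / (2 * ny * dx_km)
    fx = np.arange(nx)[None, :] / (2 * nx * dx_km)
    freq = np.hypot(fy, fx)
    with np.errstate(divide="ignore"):
        wavelength = np.where(freq > 0, 1.0 / freq, np.inf)
    keep = (wavelength >= lo_km) & (wavelength <= hi_km)
    return idctn(coeffs * keep, norm="ortho")

# Usage on a synthetic field sampled every 25 km:
mslp = 1000.0 + np.random.default_rng(0).normal(size=(120, 160))
filtered = dct_bandpass(mslp, dx_km=25.0)
print(filtered.shape)  # (120, 160)
```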
Abstract:
Diaminofluoresceins are widely used probes for detection and intracellular localization of NO formation in cultured/isolated cells and intact tissues. The fluorinated derivative, 4-amino-5-methylamino-2′,7′-difluorofluorescein (DAF-FM), has gained increasing popularity in recent years due to its improved NO-sensitivity, pH-stability, and resistance to photo-bleaching compared to the first-generation compound, DAF-2. Detection of NO production by either reagent relies on conversion of the parent compound into a fluorescent triazole, DAF-FM-T and DAF-2-T, respectively. While this reaction is specific for NO and/or reactive nitrosating species, it is also affected by the presence of oxidants/antioxidants. Moreover, the reaction with other molecules can lead to the formation of fluorescent products other than the expected triazole. Thus additional controls and structural confirmation of the reaction products are essential. Using human red blood cells as an exemplary cellular system we here describe robust protocols for the analysis of intracellular DAF-FM-T formation using an array of fluorescence-based methods (laser-scanning fluorescence microscopy, flow cytometry and fluorimetry) and analytical separation techniques (reversed-phase HPLC and LC-MS/MS). When used in combination, these assays afford unequivocal identification of the fluorescent signal as being derived from NO and are applicable to most other cellular systems without or with only minor modifications.
Abstract:
The chapter starts from the premise that an historically- and institutionally-formed orientation to music education at primary level in European countries privileges a nineteenth century Western European music aesthetic, with its focus on formal characteristics such as melody and rhythm. While there is a move towards a multi-faceted understanding of musical ability, a discrete intelligence and willingness to accept musical styles or 'open-earedness', there remains a paucity of documented evidence of this in research at primary school level. To date there has been no study undertaken which has the potential to provide policy makers and practitioners with insights into the degree of homogeneity or universality in conceptions of musical ability within this educational sector. Against this background, a study was set up to explore the following research questions: 1. What conceptions of musical ability do primary teachers hold a) of themselves and; b) of their pupils? 2. To what extent are these conceptions informed by Western classical practices? A mixed methods approach was used which included survey questionnaire and semi-structured interview. Questionnaires have been sent to all classroom teachers in a random sample of primary schools in the South East of England. This was followed up with a series of semi-structured interviews with a sub-sample of respondents. The main ideas are concerned with the attitudes, beliefs and working theories held by teachers in contemporary primary school settings. By mapping the extent to which a knowledge base for teaching can be resistant to change in schools, we can problematise primary schools as sites for diversity and migration of cultural ideas. Alongside this, we can use the findings from the study undertaken in an English context as a starting point for further investigation into conceptions of music, musical ability and assessment held by practitioners in a variety of primary school contexts elsewhere in Europe; our emphasis here will be on the development of shared understanding in terms of policies and practices in music education. Within this broader framework, our study can have a significant impact internationally, with potential to inform future policy making, curriculum planning and practice.
Abstract:
This article analyses the results of an empirical study on the 200 most popular UK-based websites in various sectors of e-commerce services. The study provides empirical evidence on unlawful processing of personal data. It comprises a survey on the methods used to seek and obtain consent to process personal data for direct marketing and advertisement, and a test on the frequency of unsolicited commercial emails (UCE) received by customers as a consequence of their registration and submission of personal information to a website. Part One of the article presents a conceptual and normative account of data protection, with a discussion of the ethical values on which EU data protection law is grounded and an outline of the elements that must be in place to seek and obtain valid consent to process personal data. Part Two discusses the outcomes of the empirical study, which reveal a significant gap between EU legal theory and practice in data protection. Although a wide majority of the websites in the sample (69%) have in place a system to ask separate consent for engaging in marketing activities, only 16.2% of them obtain consent that is valid under the standards set by EU law. The test with UCE shows that only one out of three websites (30.5%) respects the will of the data subject not to receive commercial communications. It also shows that, when submitting personal data in online transactions, there is a high probability (50%) of encountering a website that will ignore the refusal of consent and send UCE. The article concludes that there is a severe lack of compliance among UK online service providers with essential requirements of data protection law. In this respect, it suggests that there is an inadequate standard of implementation, information and supervision by the UK authorities, especially in light of the clarifications provided at EU level.
Abstract:
Background: Cortical cultures grown long-term on multi-electrode arrays (MEAs) are frequently and extensively used as models of cortical networks in studies of neuronal firing activity, neuropharmacology, toxicology and mechanisms underlying synaptic plasticity. However, in contrast to the predominantly asynchronous neuronal firing activity exhibited by intact cortex, the electrophysiological activity of mature cortical cultures is dominated by spontaneous epileptiform-like global burst events, which hinders their effective use in network-level studies, particularly for neurally-controlled animat (‘artificial animal’) applications. Thus, the identification of culture features that can be exploited to produce neuronal activity more representative of that seen in vivo could increase the utility and relevance of studies that employ these preparations. Acetylcholine has a recognised neuromodulatory role affecting excitability, rhythmicity, plasticity and information flow in vivo, although its endogenous production by cortical cultures and subsequent functional influence upon neuronal excitability remains unknown.
Results: Consequently, using MEA electrophysiological recording supported by immunohistochemical and RT-qPCR methods, we demonstrate for the first time the presence of intrinsic cholinergic neurons and significant, endogenous cholinergic tone in cortical cultures, with a characterisation of the muscarinic and nicotinic components that underlie modulation of spontaneous neuronal activity. We found that tonic muscarinic ACh receptor (mAChR) activation affects global excitability and burst event regularity in a culture age-dependent manner whilst, in contrast, tonic nicotinic ACh receptor (nAChR) activation can modulate burst duration and the proportion of spikes occurring within bursts in a spatio-temporal fashion.
Conclusions: We suggest that the presence of significant endogenous cholinergic tone in cortical cultures and the comparability of its modulatory effects to those seen in intact brain tissues support emerging, exploitable commonalities between in vivo and in vitro preparations. We conclude that experimental manipulation of endogenous cholinergic tone could offer a novel opportunity to improve the use of cortical cultures for studies of network-level mechanisms in a manner that remains largely consistent with its functional role.
Abstract:
Health care provision is significantly impacted by the ability of health providers to engineer a viable healthcare space to support care stakeholders' needs. In this paper we discuss and propose the use of organisational semiotics as a set of methods to link stakeholders to systems, which allows us to capture clinician activity, information transfer, and building use; this in turn allows us to define the value of specific systems in the care environment to specific stakeholders and the dependence between systems in a care space. We suggest the use of a semantically enhanced building information model (BIM) to support the linking of clinician activity to physical resource objects and space, and to facilitate the capture of quantifiable data, over time, concerning resource use by key stakeholders. Finally, we argue for the inclusion of appropriate stakeholder feedback and persuasive mechanisms to incentivise building user behaviour in support of organisational-level sustainability policy.
Abstract:
The objective of this book is to present the quantitative techniques that are commonly employed in empirical finance research together with real world, state of the art research examples. Each chapter is written by international experts in their fields. The unique approach is to describe a question or issue in finance and then to demonstrate the methodologies that may be used to solve it. All of the techniques described are used to address real problems rather than being presented for their own sake and the areas of application have been carefully selected so that a broad range of methodological approaches can be covered. This book is aimed primarily at doctoral researchers and academics who are engaged in conducting original empirical research in finance. In addition, the book will be useful to researchers in the financial markets and also advanced Masters-level students who are writing dissertations.
Abstract:
In this paper, various types of fault detection methods for fuel cells are compared, for example those that use a model-based approach, a data-driven approach, or a combination of the two. The potential advantages and drawbacks of each method are discussed and comparisons between methods are made. In particular, classification algorithms are investigated, which separate a data set into classes or clusters based on some prior knowledge or measure of similarity. Specifically, the application of classification methods to vectors of currents reconstructed by magnetic tomography, or directly to vectors of magnetic field measurements, is explored. Bases are simulated using the finite integration technique (FIT) and regularization techniques are employed to overcome ill-posedness. Fisher's linear discriminant is used to illustrate these concepts. Numerical experiments show that the ill-posedness of the magnetic tomography problem is also a part of the classification problem on magnetic field measurements. This is independent of the particular working mode of the cell but is influenced by the type of faulty behavior that is studied. The numerical results demonstrate the ill-posedness through the exponential decay behavior of the singular values for three examples of fault classes.
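As a pointer to how the named discriminant works in this setting, here is a minimal sketch of Fisher's linear discriminant applied to two classes of measurement vectors (the data are synthetic and the ridge term merely stands in for the regularization mentioned above; this is not the authors' code):

```python
# Sketch of Fisher's linear discriminant separating two classes of magnetic
# field measurement vectors ("healthy" vs. one fault class). Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
healthy = rng.normal(loc=0.0, scale=1.0, size=(200, 50))   # 200 vectors, 50 sensors
faulty = rng.normal(loc=0.3, scale=1.0, size=(200, 50))

m1, m2 = healthy.mean(axis=0), faulty.mean(axis=0)
# Within-class scatter; the small ridge term is a stand-in for regularization
# against ill-conditioning.
Sw = np.cov(healthy, rowvar=False) + np.cov(faulty, rowvar=False)
w = np.linalg.solve(Sw + 1e-3 * np.eye(Sw.shape[0]), m1 - m2)

# Classify a new measurement by which projected class mean it falls nearer to.
threshold = 0.5 * (m1 + m2) @ w
new_measurement = rng.normal(loc=0.3, scale=1.0, size=50)
print("fault suspected" if new_measurement @ w < threshold else "looks healthy")
```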