Abstract:
On January 11, 2008, the National Institutes of Health ('NIH') adopted a revised Public Access Policy for peer-reviewed journal articles reporting research supported in whole or in part by NIH funds. Under the revised policy, the grantee shall ensure that a copy of the author's final manuscript, including any revisions made during the peer review process, be electronically submitted to the National Library of Medicine's PubMed Central ('PMC') archive and that the person submitting the manuscript will designate a time not later than 12 months after publication at which NIH may make the full text of the manuscript publicly accessible in PMC. NIH adopted this policy to implement a new statutory requirement under which: The Director of the National Institutes of Health shall require that all investigators funded by the NIH submit or have submitted for them to the National Library of Medicine's PubMed Central an electronic version of their final, peer-reviewed manuscripts upon acceptance for publication to be made publicly available no later than 12 months after the official date of publication: Provided, That the NIH shall implement the public access policy in a manner consistent with copyright law. This White Paper is written primarily for policymaking staff in universities and other institutional recipients of NIH support responsible for ensuring compliance with the Public Access Policy. The January 11, 2008, Public Access Policy imposes two new compliance mandates. First, the grantee must ensure proper manuscript submission. The version of the article to be submitted is the final version over which the author has control, which must include all revisions made after peer review. The statutory command directs that the manuscript be submitted to PMC 'upon acceptance for publication.' That is, the author's final manuscript should be submitted to PMC at the same time that it is sent to the publisher for final formatting and copy editing. Proper submission is a two-stage process. The electronic manuscript must first be submitted through a process that requires input of additional information concerning the article, the author(s), and the nature of NIH support for the research reported. NIH then formats the manuscript into a uniform, XML-based format used for PMC versions of articles. In the second stage of the submission process, NIH sends a notice to the Principal Investigator requesting that the PMC-formatted version be reviewed and approved. Only after such approval has the grantee's manuscript submission obligation been satisfied. Second, the grantee also has a distinct obligation to grant NIH copyright permission to make the manuscript publicly accessible through PMC not later than 12 months after the date of publication. This obligation is connected to manuscript submission because the author, or the person submitting the manuscript on the author's behalf, must have the necessary rights under copyright at the time of submission to give NIH the copyright permission it requires. This White Paper explains and analyzes only the scope of the grantee's copyright-related obligations under the revised Public Access Policy and suggests six options for compliance with that aspect of the grantee's obligation. Time is of the essence for NIH grantees. As a practical matter, the grantee should have a compliance process in place no later than April 7, 2008.
More specifically, the new Public Access Policy applies to any article accepted for publication on or after April 7, 2008 if the article arose from (1) an NIH Grant or Cooperative Agreement active in Fiscal Year 2008, (2) direct funding from an NIH Contract signed after April 7, 2008, (3) direct funding from the NIH Intramural Program, or (4) an NIH employee. In addition, effective May 25, 2008, anyone submitting an application, proposal or progress report to the NIH must include the PMC reference number when citing articles arising from their NIH-funded research. (This includes applications submitted to the NIH for the May 25, 2008 and subsequent due dates.) Conceptually, the compliance challenge that the Public Access Policy poses for grantees is easily described. The grantee must depend to some extent upon the author(s) to take the necessary actions to ensure that the grantee is in compliance with the Public Access Policy because the electronic manuscripts and the copyrights in those manuscripts are initially under the control of the author(s). As a result, any compliance option will require an explicit understanding between the author(s) and the grantee about how the manuscript and the copyright in the manuscript are managed. It is useful to keep the grantee's manuscript submission obligation conceptually separate from its copyright permission obligation because the compliance personnel concerned with manuscript management may differ from those responsible for overseeing the author's copyright management. With respect to copyright management, the grantee has the following six options: (1) rely on authors to manage copyright but also request or require that these authors take responsibility for amending publication agreements that call for the transfer of so many rights that the author would be unable to grant NIH permission to make the manuscript publicly accessible ('the Public Access License'); (2) take a more active role in assisting authors in negotiating the scope of any copyright transfer to a publisher by (a) providing advice to authors concerning their negotiations or (b) acting as the author's agent in such negotiations; (3) enter into a side agreement with NIH-funded authors that grants a non-exclusive copyright license to the grantee sufficient to grant NIH the Public Access License; (4) enter into a side agreement with NIH-funded authors that grants a non-exclusive copyright license to the grantee sufficient to grant NIH the Public Access License and also grants a license to the grantee to make certain uses of the article, including posting a copy in the grantee's publicly accessible digital archive or repository and authorizing the article to be used in connection with teaching by university faculty; (5) negotiate a more systematic and comprehensive agreement with the biomedical publishers to ensure either that the publisher has a binding obligation to submit the manuscript and to grant NIH permission to make the manuscript publicly accessible or that the author retains sufficient rights to do so; or (6) instruct NIH-funded authors to submit manuscripts only to journals with binding deposit agreements with NIH or to journals whose copyright agreements permit authors to retain sufficient rights to authorize NIH to make manuscripts publicly accessible.
Abstract:
Weak references are references that do not prevent the object they point to from being garbage collected. Most realistic languages, including Java, SML/NJ, and OCaml, have some facility for programming with weak references. Weak references are used in implementing idioms like memoizing functions and hash-consing in order to avoid potential memory leaks. However, the semantics of weak references in many languages are not clearly specified. Without a formal semantics for weak references it becomes impossible to prove the correctness of implementations making use of this feature. Previous work by Hallett and Kfoury extends λgc, a language for modeling garbage collection, to λweak, a similar language with weak references. Using this previously formalized semantics for weak references, we consider two issues related to the well-behavedness of programs. Firstly, we provide a new, simpler proof of the well-behavedness of the syntactically restricted fragment of λweak defined previously. Secondly, we give a natural semantic criterion for well-behavedness, much broader than the syntactic restriction, which is useful as a principle for programming with weak references. Furthermore, we extend a result proved previously for λgc which allows one to use type inference to collect some reachable objects that are never used. We prove that this result holds for our language, and we extend it to allow the collection of weakly-referenced reachable garbage without incurring the computational overhead sometimes associated with collecting weak bindings (e.g. the need to recompute a memoized function). Lastly, we extend the semantic framework to model the key/value weak references found in Haskell, and we prove that the Haskell semantics is equivalent to a simpler semantics owing to the lack of side effects in our language.
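A minimal illustration of the memoization idiom mentioned above, using Python's weakref module rather than the λweak calculus itself; the wrapper class and cache name are ours:

```python
import weakref

class Result:
    """Wrapper so cached values can be weakly referenced (plain ints cannot be)."""
    def __init__(self, value):
        self.value = value

# Entries are dropped automatically once no strong reference to the Result
# survives, so the memo table cannot keep otherwise-dead objects alive.
_cache = weakref.WeakValueDictionary()

def memo_square(n):
    hit = _cache.get(n)
    if hit is not None:
        return hit
    res = Result(n * n)   # an expensive computation stands in here
    _cache[n] = res
    return res

r = memo_square(12)
assert memo_square(12) is r   # served from the weak cache
del r                         # last strong reference gone...
# ...so the entry may be collected; a later call simply recomputes it.
```

Because the table holds only weak references, an entry vanishes once its value is otherwise unreachable; the price, as the abstract notes, is that a collected entry must be recomputed.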
Abstract:
Nearest neighbor retrieval is the task of identifying, given a database of objects and a query object, the objects in the database that are the most similar to the query. Retrieving nearest neighbors is a necessary component of many practical applications, in fields as diverse as computer vision, pattern recognition, multimedia databases, bioinformatics, and computer networks. At the same time, finding nearest neighbors accurately and efficiently can be challenging, especially when the database contains a large number of objects, and when the underlying distance measure is computationally expensive. This thesis proposes new methods for improving the efficiency and accuracy of nearest neighbor retrieval and classification in spaces with computationally expensive distance measures. The proposed methods are domain-independent, and can be applied in arbitrary spaces, including non-Euclidean and non-metric spaces. In this thesis particular emphasis is given to computer vision applications related to object and shape recognition, where expensive non-Euclidean distance measures are often needed to achieve high accuracy. The first contribution of this thesis is the BoostMap algorithm for embedding arbitrary spaces into a vector space with a computationally efficient distance measure. Using this approach, an approximate set of nearest neighbors can be retrieved efficiently - often orders of magnitude faster than retrieval using the exact distance measure in the original space. The BoostMap algorithm has two key distinguishing features with respect to existing embedding methods. First, embedding construction explicitly maximizes the amount of nearest neighbor information preserved by the embedding. Second, embedding construction is treated as a machine learning problem, in contrast to existing methods that are based on geometric considerations. The second contribution is a method for constructing query-sensitive distance measures for the purposes of nearest neighbor retrieval and classification. In high-dimensional spaces, query-sensitive distance measures allow for automatic selection of the dimensions that are the most informative for each specific query object. It is shown theoretically and experimentally that query-sensitivity increases the modeling power of embeddings, allowing embeddings to capture a larger amount of the nearest neighbor structure of the original space. The third contribution is a method for speeding up nearest neighbor classification by combining multiple embedding-based nearest neighbor classifiers in a cascade. In a cascade, computationally efficient classifiers are used to quickly classify easy cases, and classifiers that are more computationally expensive and also more accurate are only applied to objects that are harder to classify. An interesting property of the proposed cascade method is that, under certain conditions, classification time actually decreases as the size of the database increases, a behavior that is in stark contrast to the behavior of typical nearest neighbor classification systems. The proposed methods are evaluated experimentally in several different applications: hand shape recognition, off-line character recognition, online character recognition, and efficient retrieval of time series. In all datasets, the proposed methods lead to significant improvements in accuracy and efficiency compared to existing state-of-the-art methods. 
In some datasets, the general-purpose methods introduced in this thesis even outperform domain-specific methods that have been custom-designed for such datasets.
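The embedding-based retrieval scheme described above can be pictured as a generic filter-and-refine loop: a cheap distance in the embedded space shortlists candidates, and the expensive original distance re-ranks only that shortlist. The sketch below illustrates that general pattern, not the BoostMap construction itself; `embed` and `exact_dist` are placeholder callables.

```python
import numpy as np

def filter_and_refine(query, database, embed, exact_dist, shortlist=50, k=5):
    """Generic filter-and-refine nearest neighbor retrieval.

    embed: maps an object to a vector in the cheap embedded space.
    exact_dist: the expensive distance in the original space.
    """
    q_vec = embed(query)
    db_vecs = np.array([embed(x) for x in database])   # typically precomputed offline
    # Filter step: cheap L1 distances in the embedded space.
    cheap = np.abs(db_vecs - q_vec).sum(axis=1)
    candidates = np.argsort(cheap)[:shortlist]
    # Refine step: exact distance evaluated only on the shortlist.
    refined = sorted(candidates, key=lambda i: exact_dist(query, database[i]))
    return refined[:k]

# Toy usage: 1-D points, a stand-in "expensive" distance, identity embedding.
data = list(np.linspace(0.0, 100.0, 1000))
emb = lambda x: np.array([x])
dist = lambda a, b: abs(a - b)
print(filter_and_refine(37.2, data, emb, dist))
```

The shortlist size trades accuracy for speed: a larger shortlist recovers more true neighbors at the cost of more exact-distance evaluations.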
Abstract:
The recognition and protection of constitutional rights is a fundamental precept. In Ireland, the right to marry is provided for in the equality provisions of Article 40 of the Irish Constitution (1937). However, lesbians and gay men are denied the right to marry in Ireland. The ‘last word’ on this issue was delivered by the High Court in 2006, when Katherine Zappone and Ann Louise Gilligan sought, but failed, to have their Canadian marriage recognised in Ireland. My thesis centres on this constitutional court ruling. So as to contextualise the pursuit of marriage equality in Ireland, I provide details of the Irish trajectory vis-à-vis relationship and family recognition for same-sex couples. In Chapter One, I discuss the methodological orientation of my research, which derives from a critical perspective. Chapter Two presents my theorisation of the principle of equality and the concept of difference. In Chapter Three, I discuss the history of the institution of marriage in the West with its legislative underpinning. Marriage also has a constitutional underpinning in Ireland, which derives from Article 41 of our Constitution. In Chapter Four, I discuss ways in which marriage and family were conceptualised in Ireland, by looking at historical controversies surrounding the legalisation of contraception and divorce. Chapter Five presents a Critical Discourse Analysis of the High Court ruling in Zappone and Gilligan. In Chapter Six, I critique text from three genres of discourse, i.e. ‘Letters to the Editor’ regarding same-sex marriage in Ireland, communication from legislators vis-à-vis the 2004 legislative impediment to same-sex marriage in Ireland, and parliamentary debates surrounding the 2010 enactment of civil partnership legislation in Ireland. I conclude my research by reflecting on my methodological and theoretical considerations with a view to answering my research questions. Author’s Update: Following the outcome of the 2015 constitutional referendum vis-à-vis Article 41, marriage equality has been realised in Ireland.
Abstract:
Multiple models, methods and frameworks have been proposed to guide Design Science Research (DSR) application to address relevant classes of problems in the Information Systems (IS) discipline. While much of the ambiguity around the research paradigm has been removed, only the surface has been scratched on DSR efforts where the researcher takes an active role in organizational and industrial engagement to solve a specific problem and generalize the solution to a class of problems. Such DSR projects can have a significant impact on practice, link theories to real contexts and extend the scope of DSR. Considering these multiform settings, neither the implications for theorizing nor the crucial role of the researcher in the interplay of DSR and IS projects has been properly addressed. The emergent nature of such projects needs to be further investigated to achieve such contributions for both theory and practice. This paper raises multiple theoretical, organizational and managerial considerations for a meta-level monitoring model for emergent DSR projects.
Abstract:
This paper looks into economic insights offered by considerations of two important financial markets in Vietnam, gold and USD. In general, the paper focuses on time series properties, mainly returns at different frequencies, and tests the weak-form efficient market hypothesis. All the tests reject the efficiency of both the gold and foreign exchange markets. All time series exhibit strong serial correlations. ARMA-GARCH specifications appear to have performed well with different time series. In all cases the changing volatility phenomenon is strongly supported through empirical data. An additional test is performed on the daily USD return to try to capture the impacts of the Asian financial crisis and the applicable daily price limits. Neither the Asian crisis nor the central bank-devised limits are found to have a substantial influence on the risk level of the daily USD return.
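A minimal sketch of the kind of workflow described, assuming a daily return series and the statsmodels and arch packages; the simulated price series is a placeholder for actual gold or USD quotes:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.diagnostic import acorr_ljungbox
from arch import arch_model

# Placeholder series: replace with actual daily gold or USD/VND closing prices.
rng = np.random.default_rng(0)
prices = pd.Series(1.6e4 * np.exp(np.cumsum(rng.normal(0, 0.004, 2000))))
returns = 100 * np.log(prices).diff().dropna()

# Weak-form efficiency check: Ljung-Box test for serial correlation in returns
# (small p-values reject the no-autocorrelation null).
print(acorr_ljungbox(returns, lags=[10]))

# AR(1) mean with GARCH(1,1) volatility to capture the changing-volatility effect.
result = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, q=1).fit(disp="off")
print(result.summary())
```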
Abstract:
HYPERJOSEPH combines hypertext, information retrieval, literary studies, Biblical scholarship, and linguistics. Dialectically, this paper contrasts hypertextual form (the extant tool) with AI-captured content (a desideratum) in the HYPERJOSEPH project. The discussion is more general and oriented towards epistemology.
Abstract:
A natural approach to representing and reasoning about temporal propositions (i.e., statements with time-dependent truth-values) is to associate them with time elements. In the literature, there are three choices regarding the primitive for the ontology of time: (1) instantaneous points, (2) durative intervals and (3) both points and intervals. Problems may arise when one conflates different views of temporal structure and questions whether certain types of temporal propositions can be validly and meaningfully associated with different time elements. In this paper, we shall summarize an ontological glossary with respect to time elements, and introduce a wider range of meta-predicates for ascribing temporal propositions to time elements. Based on these, we shall also devise a versatile categorization of temporal propositions, which can subsume those representative categories proposed in the literature, including that of Vendler, of McDermott, of Allen, of Shoham, of Galton and of Terenziani and Torasso. It is demonstrated that the new categorization of propositions, together with the proposed range of meta-predicates, provides the expressive power for modeling some typical temporal terms/phenomena, such as starting-instant, stopping-instant, dividing-instant, instigation, termination and intermingling.
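To make the point/interval distinction and the idea of a meta-predicate concrete, here is an illustrative sketch, not the paper's formal system; the predicates holds_on and holds_in are stand-ins for the kinds of meta-predicates discussed:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    t: float

@dataclass(frozen=True)
class Interval:
    start: float
    end: float   # convention: start < end, so intervals are durative

    def subintervals(self, step=1.0):
        """Crude discretisation, used only to illustrate 'holds within'."""
        t = self.start
        while t + step <= self.end:
            yield Interval(t, t + step)
            t += step

def holds_on(prop, interval):
    """Proposition is true of the interval taken as a whole."""
    return prop(interval)

def holds_in(prop, interval, step=1.0):
    """Proposition is true of at least one sub-interval (a weaker ascription)."""
    return any(prop(sub) for sub in interval.subintervals(step))

# Example: 'the light is on' modelled as true exactly over [2, 5].
light_on = lambda iv: 2.0 <= iv.start and iv.end <= 5.0
print(holds_on(light_on, Interval(0.0, 6.0)))   # False: not true of the whole interval
print(holds_in(light_on, Interval(0.0, 6.0)))   # True: true of some sub-interval
```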
Abstract:
Computer egress simulation has the potential to be used in large-scale incidents to provide live advice to incident commanders. While there are many considerations which must be taken into account when applying such models to live incidents, one of the first concerns the computational speed of simulations. No matter how important the insight provided by the simulation, numerical hindsight will not prove useful to an incident commander. Thus for this type of application to be useful, it is essential that the simulation can be run many times faster than real time. Parallel processing is a method of reducing run times for very large computational simulations by distributing the workload amongst a number of CPUs. In this paper we examine the development of a parallel version of the buildingEXODUS software. The parallel strategy implemented is based on a systematic partitioning of the problem domain onto an arbitrary number of sub-domains. Each sub-domain is computed on a separate processor and runs its own copy of the EXODUS code. The software has been designed to work on typical office-based networked PCs but will also function on a Windows-based cluster. Two evaluation scenarios using the parallel implementation of EXODUS are described: a large open area and a 50-storey high-rise building scenario. Speed-ups of up to 3.7 are achieved using up to six computers, with the high-rise building evacuation simulation running 6.4 times faster than real time.
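The partitioning idea can be pictured with a toy load-balancing sketch: split a floor-plan grid into column strips carrying roughly equal numbers of agents, one strip per processor. This is only an illustration of domain partitioning in general, not the strategy implemented in buildingEXODUS:

```python
import numpy as np

def partition_domain(occupancy, n_subdomains):
    """Split a 2-D occupancy grid into column strips of roughly equal agent load.

    occupancy: 2-D array of agent counts per cell (a stand-in for real geometry).
    Returns a list of (col_start, col_end) slices, one per processor.
    """
    col_load = occupancy.sum(axis=0)
    cum = np.cumsum(col_load)
    total = cum[-1]
    boundaries = [0]
    for k in range(1, n_subdomains):
        # first column where the cumulative load reaches k/n of the total
        boundaries.append(int(np.searchsorted(cum, k * total / n_subdomains)))
    boundaries.append(occupancy.shape[1])
    return list(zip(boundaries[:-1], boundaries[1:]))

# Toy floor plan: 40 x 100 cells with agents clustered towards the left.
rng = np.random.default_rng(0)
floor = rng.poisson(lam=np.linspace(2.0, 0.2, 100), size=(40, 100))
for start, end in partition_domain(floor, 6):
    print(f"columns {start:3d}-{end:3d}: {floor[:, start:end].sum()} agents")
```

In a real run each strip would be simulated by its own process, with agents handed over at strip boundaries; the sketch only shows how the workload can be balanced before distribution.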
Abstract:
Comments on the Chancery Division decision in Horsham Properties Group Ltd v Clark on whether a mortgagee's exercise of its contractual right, on the mortgagor falling into arrears, to appoint receivers such that the property could be sold and possession obtained without triggering the court's discretionary powers pursuant to the Administration of Justice Act 1970 s.36 infringed the mortgagor's rights under the European Convention on Human Rights 1950 Protocol 1 art.1. Considers the implications of proposed reforms recasting the mortgagee's right to possession as a discretionary remedy. [From Legal Journals Index]
Abstract:
Major and trace elemental composition provides a powerful basis for forensic comparison of soils, sediments and rocks. However, it is important that the potential 'errors' associated with the procedures are fully understood and quantified, and that standard protocols are applied for sample preparation and analysis. This paper describes such a standard procedure and reports results both for instrumental measurement precision (repeatability) and overall 'method' precision (reproducibility). Results obtained both for certified reference materials and example soils show that the instrumental measurement precision (defined by the coefficient of variation, CV) for most elements is better than 2-3%. When different solutions were prepared from the same sample powder, and from different sub-sample powders prepared from the same parent sample, the CV increased to c. 5-6% for many elements. The largest variation was found in results for certified reference materials generated from 23 instrument runs over an 18 month period (mean CV = c. 11%). Some elements were more variable than others. W was found to be the most variable, and the elements V, Cr, Co, Cu, Ni and Pb also showed higher than average variability. SiO2, CaO, Al2O3 and Fe2O3, Rb, Sr, La, Ce, Nd and Sm generally showed lower than average variability, and therefore provided the most reliable basis for inter-sample comparison. It is recommended that, whenever possible, samples relating to the same investigation should be analysed in the same sample run, or at least in sequential runs.
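The repeatability/reproducibility comparison rests on the coefficient of variation; a small sketch with hypothetical replicate values (the concentrations below are invented purely for illustration):

```python
import numpy as np

def coefficient_of_variation(values):
    """CV in percent: relative spread of replicate measurements."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Hypothetical replicate Sr concentrations (ppm) for one reference material.
same_solution_repeats = [152.1, 151.7, 152.4, 151.9, 152.3]   # instrument repeatability
separate_preparations = [150.2, 153.8, 148.9, 155.1, 151.6]   # full-method reproducibility

print(f"repeatability CV:   {coefficient_of_variation(same_solution_repeats):.1f}%")
print(f"reproducibility CV: {coefficient_of_variation(separate_preparations):.1f}%")
```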
Abstract:
Stereology typically concerns estimation of properties of a geometric structure from plane section information. This paper provides a brief review of some statistical aspects of this rapidly developing field, with some reference to applications in the earth sciences. After an introductory discussion of the scope of stereology, section 2 briefly mentions results applicable when no assumptions can be made about the stochastic nature of the sampled matrix, statistical considerations then arising solely from the ‘randomness’ of the plane section. The next two sections postulate embedded particles of specific shapes, the particular case of spheres being discussed in some detail. References are made to results for ‘thin slices’ and other probing mechanisms. Randomly located convex particles, of otherwise arbitrary shape, are discussed in section 5 and the review concludes with a specific application of stereological ideas to some data on neolithic mining.
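As a flavour of the sphere-section results the review touches on: a uniform random plane section of a sphere of radius R has expected disc radius πR/4, so observed section radii systematically understate the true radius. A short Monte Carlo check (illustrative only, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
R = 1.0
n = 200_000

# Distance from the sphere centre to a uniform random plane that hits the sphere.
z = rng.uniform(0.0, R, size=n)
section_radii = np.sqrt(R**2 - z**2)

print(f"mean section radius: {section_radii.mean():.4f}")
print(f"theory (pi*R/4):     {np.pi * R / 4:.4f}")
```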
Abstract:
The nematode/copepod ratio is critically examined with a view to adding some precision to its proposed use in pollution ecology. At two unpolluted intertidal sites, differing markedly in sediment grade, the metabolic requirements of copepods are shown to be equivalent to the requirements of that fraction of the nematode population which feeds in the same way. The partitioning of this total energy requirement among individuals depends on the distribution of individual metabolic body sizes and the relative rates of metabolism. The distribution of body sizes is constrained by the sediment granulometry, which affects nematodes and copepods differently. These considerations enable precise predictions of the nematode/copepod ratios expected in unpolluted situations, against which observed ratios can be compared.
Abstract:
Satellite-derived remote-sensing reflectance (Rrs) can be used for mapping biogeochemically relevant variables, such as the chlorophyll concentration and the Inherent Optical Properties (IOPs) of the water, at global scale for use in climate-change studies. Prior to generating such products, suitable algorithms have to be selected that are appropriate for the purpose. Algorithm selection needs to account for both qualitative and quantitative requirements. In this paper we develop an objective methodology designed to rank the quantitative performance of a suite of bio-optical models. The objective classification is applied using the NASA bio-Optical Marine Algorithm Dataset (NOMAD). Using in situ Rrs as input to the models, the performance of eleven semi-analytical models, as well as five empirical chlorophyll algorithms and an empirical diffuse attenuation coefficient algorithm, is ranked for spectrally-resolved IOPs, chlorophyll concentration and the diffuse attenuation coefficient at 489 nm. The sensitivity of the objective classification and the uncertainty in the ranking are tested using a Monte Carlo approach (bootstrapping). Results indicate that the performance of the semi-analytical models varies depending on the product and wavelength of interest. For chlorophyll retrieval, empirical algorithms perform better than semi-analytical models, in general. The performance of these empirical models reflects either their immunity to scale errors or instrument noise in Rrs data, or simply that the data used for model parameterisation were not independent of NOMAD. Nonetheless, uncertainty in the classification suggests that the performance of some semi-analytical algorithms at retrieving chlorophyll is comparable with the empirical algorithms. For phytoplankton absorption at 443 nm, some semi-analytical models also perform with similar accuracy to an empirical model. We discuss the potential biases, limitations and uncertainty in the approach, as well as additional qualitative considerations for algorithm selection for climate-change studies. Our classification has the potential to be routinely implemented, such that the performance of emerging algorithms can be compared with existing algorithms as they become available. In the long term, such an approach will further aid algorithm development for ocean-colour studies.
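A skeletal version of the kind of bootstrap-based ranking described, assuming co-located in situ and retrieved chlorophyll values; the error metric, algorithm names and data below are placeholders rather than the paper's exact protocol:

```python
import numpy as np

def rank_algorithms(estimates, truth, n_boot=1000, seed=0):
    """Rank candidate algorithms by RMSE in log10 space, with bootstrap uncertainty.

    estimates: dict mapping algorithm name -> array of retrieved values
    truth: array of co-located in situ values (same length)
    Returns each algorithm's median rank and its 5th-95th percentile range.
    """
    rng = np.random.default_rng(seed)
    names = list(estimates)
    errors = {k: np.log10(estimates[k]) - np.log10(truth) for k in names}
    n = len(truth)
    ranks = np.empty((n_boot, len(names)), dtype=int)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)                 # resample the match-ups
        rmse = [np.sqrt(np.mean(errors[k][idx] ** 2)) for k in names]
        ranks[b] = np.argsort(np.argsort(rmse))          # 0 = best in this replicate
    return {
        k: (np.median(ranks[:, j]), np.percentile(ranks[:, j], [5, 95]))
        for j, k in enumerate(names)
    }

# Hypothetical chlorophyll retrievals from three candidate algorithms.
rng = np.random.default_rng(1)
chl_true = 10 ** rng.uniform(-2, 1, size=300)            # roughly 0.01-10 mg m^-3
retrievals = {
    "alg_A": chl_true * 10 ** rng.normal(0.00, 0.15, 300),
    "alg_B": chl_true * 10 ** rng.normal(0.05, 0.25, 300),
    "alg_C": chl_true * 10 ** rng.normal(-0.02, 0.20, 300),
}
print(rank_algorithms(retrievals, chl_true))
```

Overlapping rank intervals signal that two algorithms cannot be confidently separated, which mirrors the uncertainty in the classification noted above.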
Abstract:
Ocean warming can modify the ecophysiology and distribution of marine organisms, and relationships between species, with nonlinear interactions between ecosystem components potentially resulting in trophic amplification. Trophic amplification (or attenuation) describes the propagation of a hydroclimatic signal up the food web, causing magnification (or depression) of biomass values along one or more trophic pathways. We have employed 3-D coupled physical-biogeochemical models to explore ecosystem responses to climate change with a focus on trophic amplification. The response of phytoplankton and zooplankton to global climate-change projections, carried out with the IPSL Earth System Model by the end of the century, is analysed on a global and regional basis, including European seas (NE Atlantic, Barents Sea, Baltic Sea, Black Sea, Bay of Biscay, Adriatic Sea, Aegean Sea) and the Eastern Boundary Upwelling System (Benguela). Results indicate that globally and in the Atlantic Margin and North Sea, increased ocean stratification causes primary production and zooplankton biomass to decrease in response to a warming climate, whilst in the Barents, Baltic and Black Seas, primary production and zooplankton biomass increase. Projected warming characterized by an increase in sea surface temperature of 2.29 ± 0.05 °C leads to a reduction in zooplankton and phytoplankton biomasses of 11% and 6%, respectively. This suggests negative amplification of climate-driven modifications of trophic level biomass through bottom-up control, leading to a reduced capacity of oceans to regulate climate through the biological carbon pump. Simulations suggest negative amplification is the dominant response across 47% of the ocean surface and prevails in the tropical oceans, whilst positive trophic amplification prevails in the Arctic and Antarctic oceans. Trophic attenuation is projected in temperate seas. Uncertainties in ocean plankton projections, associated with the use of single global and regional models, imply the need for caution when extending these considerations into higher trophic levels.