864 results for Sharable Content Object Reference Model (SCORM)
The Zebrafish Information Network (ZFIN): a resource for genetic, genomic and developmental research
Abstract:
The Zebrafish Information Network (ZFIN) is a web-based community resource of zebrafish genetic, genomic and developmental research information (http://zfin.org). ZFIN provides an anatomical atlas and dictionary, developmental staging criteria, research methods, pathology information and a link to the ZFIN relational database (http://zfin.org/ZFIN/). The database, built on a relational, object-oriented model, provides integrated information about mutants, genes, genetic markers, mapping panels, publications and contact information for the zebrafish research community. The database is populated with curated published data, user-submitted data and large dataset uploads. The data are supported by a broad range of data types, including text, images, graphical representations and genetic maps. ZFIN incorporates links to other genomic resources that provide sequence and ortholog data. Zebrafish nomenclature guidelines and an automated registration mechanism for new names are provided. Extensive usability testing has resulted in an easy-to-learn and easy-to-use forms interface with complex searching capabilities.
Abstract:
After reviewing the Present Value Model (PVM), in its basic form and with its major extensions, the authors carried out a literature review on the instrumental uses of farmland prices, namely what land prices may reveal within the framework of the PVM. Urban influence, non-market goods and climate change are topics where the PVM, applied to empirical data, may reveal farmers' or landowners' beliefs or subjective values; these are discussed in this paper. There is also extensive discussion of public regulations and how they may affect the land price directly, or through its present value.
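For reference, the basic PVM states that the land price equals the expected discounted stream of future land rents; a standard textbook statement (not taken from this paper) is:

```latex
% Basic Present Value Model: land price as discounted expected rents,
% assuming a constant discount rate r.
P_t \;=\; \sum_{i=1}^{\infty} \frac{E_t\!\left[R_{t+i}\right]}{(1+r)^{i}}
% With constant expected rent R this reduces to P_t = R / r.
```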
Abstract:
This paper describes a practical application of MDA and reverse engineering based on a domain-specific modelling language. A well-defined metamodel of a domain-specific language is useful for verification and validation of associated tools. We apply this approach to SIFA, a security analysis tool. SIFA has evolved as requirements have changed, and it has no metamodel; hence, testing SIFA's correctness is difficult. We introduce a formal metamodelling approach to develop a well-defined metamodel of the domain. Initially, we develop a domain model in EMF by reverse engineering the SIFA implementation. Then we transform the EMF model to Object-Z using model transformation. Finally, we complete the Object-Z model by specifying system behaviour. The outcome is a well-defined metamodel that precisely describes the domain and the security properties that it analyses. It also provides a reliable basis for testing the current SIFA implementation and forward engineering its successor.
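As an illustration only: a metamodel fixes which model elements and relationships are legal, so that tools can be checked against it. The toy sketch below uses entirely hypothetical names and plain Python (the actual work uses EMF and Object-Z), showing a minimal domain metamodel with the kind of well-formedness check such a metamodel enables:

```python
# Toy sketch of a domain metamodel with a well-formedness check.
# All names are hypothetical; the paper itself uses EMF and Object-Z.
from dataclasses import dataclass, field


@dataclass
class Port:
    name: str


@dataclass
class Component:
    name: str
    ports: list[Port] = field(default_factory=list)


@dataclass
class Flow:
    source: Port
    target: Port


@dataclass
class Diagram:
    components: list[Component]
    flows: list[Flow]

    def well_formed(self) -> bool:
        """Every flow endpoint must belong to some component's ports."""
        known = {id(p) for c in self.components for p in c.ports}
        return all(id(f.source) in known and id(f.target) in known
                   for f in self.flows)


a, b = Port("out"), Port("in")
d = Diagram(components=[Component("A", [a]), Component("B", [b])],
            flows=[Flow(a, b)])
print(d.well_formed())  # True
```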
Abstract:
Dissertation submitted to obtain the degree of Master in Informatics Engineering.
Abstract:
In the analysis of equilibrium policies in a differential game, if agents have different time preference rates, the cooperative (Pareto optimum) solution obtained by applying Pontryagin's Maximum Principle becomes time inconsistent. In this work we derive a set of dynamic programming equations (in discrete and continuous time) whose solutions are time-consistent equilibrium rules for N-player cooperative differential games in which agents differ in their instantaneous utility functions and also in their discount rates of time preference. The results are applied to the study of a cake-eating problem describing the management of a common-property exhaustible natural resource. The extension of the results to a simple common-property renewable natural resource model in infinite horizon is also discussed.
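For context, the single-agent cake-eating problem that the paper generalizes has the familiar dynamic programming form (a textbook baseline, not the paper's heterogeneous-discounting equations):

```latex
% Cake-eating baseline: stock x_t, consumption c_t, discount factor \beta.
V(x_t) \;=\; \max_{0 \le c_t \le x_t} \; u(c_t) + \beta\, V(x_t - c_t),
\qquad x_{t+1} = x_t - c_t
% The paper derives analogous equations whose solutions stay time
% consistent when the N players' utilities and discount rates differ.
```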
Abstract:
Currently, individuals including designers, contractors, and owners learn about project requirements by studying a combination of paper and electronic copies of the construction documents, including the drawings, specifications (standard and supplemental), road and bridge standard drawings, design criteria, contracts, addenda, and change orders. This can be a tedious process, since one needs to go back and forth between the various documents (paper or electronic) to obtain information about the entire project. Object-oriented computer-aided design (OO-CAD) is an innovative technology that can change this process through the graphical portrayal of information. OO-CAD allows users to point and click on portions of an object-oriented drawing that are linked to relevant databases of information (e.g., specifications, procurement status, and shop drawings). The vision of this study is to turn paper-based design standards and construction specifications into an object-oriented design and specification (OODAS) system, or visual electronic reference library (ERL). Individuals can use the system through a handheld wireless book-size laptop that includes all of the necessary software for operating in a 3D environment. All parties involved in transportation projects can access all of the standards and requirements simultaneously using a 3D graphical interface. By using this system, users will have all of the design elements and specifications readily available without concern for omissions. A prototype object-oriented model was created and demonstrated to potential users representing counties, cities, and the state. Findings suggest that a system like this could improve productivity in finding information by as much as 75% and provide greater confidence that all relevant information has been identified. It was also apparent that this system would be used by more people in construction than in design, and there was concern about the cost to develop and maintain the complete system. The future direction should focus on a project-based system that helps contractors and DOT inspectors find information (e.g., road standards, specifications, instructional memorandums) more rapidly as it pertains to a specific project.
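As an illustration only, and not the study's prototype: the core linking idea can be sketched as a lookup from drawing-object identifiers to related reference documents. All identifiers and document names below are hypothetical.

```python
# Hypothetical sketch of the OO-CAD linking idea: each drawing object
# carries an ID that resolves to related reference documents.
from dataclasses import dataclass, field


@dataclass
class DrawingObject:
    object_id: str          # ID of the element in the 3D drawing
    description: str
    links: dict = field(default_factory=dict)  # document type -> reference


# Hypothetical project data: a bridge girder linked to its documents.
girder = DrawingObject(
    object_id="BRG-101",
    description="Prestressed concrete girder, span 2",
    links={
        "specification": "Standard Spec 2403 (Concrete Bridges)",
        "road_standard": "Road & Bridge Standard Drawing RB-45",
        "shop_drawing": "ShopDwg-BRG-101-rev2.pdf",
    },
)


def on_click(obj: DrawingObject, doc_type: str) -> str:
    """Simulate point-and-click retrieval of a linked document."""
    return obj.links.get(doc_type, "no linked document")


print(on_click(girder, "specification"))
```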
Abstract:
Object detection is a fundamental task of computer vision that is used as a core part of a number of industrial and scientific applications, for example in robotics, where objects need to be correctly detected and localized before being grasped and manipulated. Existing object detectors vary in (i) the amount of supervision they need for training, (ii) the type of learning method adopted (generative or discriminative) and (iii) the amount of spatial information used in the object model (model-free, using no spatial information, or model-based, with an explicit spatial model of the object). Although some existing methods report good performance in the detection of certain objects, the results tend to be application-specific, and no universal method has been found that clearly outperforms all others in all areas. This work proposes a novel generative part-based object detector. The generative learning procedure of the developed method allows learning from positive examples only. The detector is based on finding semantically meaningful parts of the object (i.e. a part detector) that can provide information beyond object location, for example pose. The object class model, i.e. the appearance of the object parts and their spatial variation (constellation), is explicitly modelled in a fully probabilistic manner. The appearance is based on bio-inspired complex-valued Gabor features that are transformed into part probabilities by an unsupervised Gaussian Mixture Model (GMM). The proposed novel randomized GMM enables learning from only a few training examples. The probabilistic spatial model of the part configurations is constructed with a mixture of 2D Gaussians. The appearance of the object parts is learned in an object canonical space that removes geometric variations from the part appearance model. Robustness to pose variations is achieved by object pose quantization, which is more efficient than the previously used scale and orientation shifts in the Gabor feature space. The performance of the resulting generative object detector is characterized by high recall with low precision, i.e. the generative detector produces a large number of false positive detections. A discriminative classifier is therefore used to prune the false positive candidate detections produced by the generative detector, improving precision while keeping recall high. Using only a small number of positive examples, the developed object detector performs comparably to state-of-the-art discriminative methods.
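A minimal sketch of the appearance ingredient only, assuming scikit-image, scikit-learn and synthetic data (the paper's randomized GMM and constellation model are not reproduced here): Gabor responses are turned into per-pixel part probabilities by an unsupervised GMM.

```python
# Sketch: Gabor features -> part probabilities via a GMM, the appearance
# ingredient of the detector (simplified; not the paper's randomized GMM).
import numpy as np
from skimage.filters import gabor
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
image = rng.random((64, 64))  # stand-in for a training image

# Complex-valued Gabor responses at several orientations.
feats = []
for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
    real, imag = gabor(image, frequency=0.2, theta=theta)
    feats.append(np.abs(real + 1j * imag))  # magnitude of the response
X = np.stack(feats, axis=-1).reshape(-1, len(feats))  # one row per pixel

# Unsupervised GMM over feature vectors; each component is a candidate
# "part" appearance, and responsibilities act as part probabilities.
gmm = GaussianMixture(n_components=5, covariance_type="diag",
                      random_state=0).fit(X)
part_probs = gmm.predict_proba(X).reshape(64, 64, 5)
print(part_probs.shape)  # (64, 64, 5): per-pixel part probabilities
```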
Abstract:
The purpose of this thesis is to explore different digital content management models and to propose a process for properly managing the content on an organization's website. This process also briefly defines the roles and responsibilities of the different actors involved. The thesis is divided into two parts. First, the theoretical analysis identifies the two main content management models: content management standardization and content management localization (also called adaptation). Each of these models has been analyzed through a SWOT analysis in order to identify its particularities and which of them is the better option for particular organizational objectives. In the empirical part, the thesis measures organizational website performance by comparing two main sources of data. On the one hand, the international website is analyzed in order to identify the results of content management standardization. On the other hand, content management localization is analyzed through the key metrics of the Dutch page of the same organization. The resulting output is a process model for localization, as well as recommendations on how to proceed when creating a digital content management strategy. However, more research is recommended to provide more comprehensive managerial solutions.
Abstract:
The purpose of this chapter is to provide an elementary introduction to the non-renewable resource model with multiple demand curves. The theoretical literature following Hotelling (1931) assumed that all energy needs are satisfied by one type of resource (e.g. ‘oil’), extractible at different per-unit costs. This formulation implicitly assumes that all users are the same distance from each resource pool, that all users are subject to the same regulations, and that motorist users can switch as easily from liquid fossil fuels to coal as electric utilities can. These assumptions imply, as Herfindahl (1967) showed, that in competitive equilibrium all users will exhaust a lower cost resource completely before beginning to extract a higher cost resource: simultaneous extraction of different grades of oil or of oil and coal should never occur. In trying to apply the single-demand curve model during the last twenty years, several teams of authors have independently found a need to generalize it to account for users differing in their (1) location, (2) regulatory environment, or (3) resource needs. Each research team found that Herfindahl's strong, unrealistic conclusion disappears in the generalized model; in its place, a weaker Herfindahl result emerges. Since each research team focussed on a different application, however, it has not always been clear that everyone has been describing the same generalized model. Our goal is to integrate the findings of these teams and to exposit the generalized model in a form which is easily accessible.
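For orientation, the two classical results referred to above, stated in their textbook forms (not the chapter's generalized model):

```latex
% Hotelling (1931): along a competitive extraction path, the net price
% (scarcity rent) of the resource rises at the rate of interest r:
p_t - c = (p_0 - c)\, e^{rt}
% Herfindahl (1967): with constant unit costs c_1 < c_2 and a single
% demand curve, the cheaper pool is exhausted first; simultaneous
% extraction of the two grades never occurs in equilibrium.
```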
Abstract:
We describe a method for modeling object classes (such as faces) using 2D example images, and an algorithm for matching a model to a novel image. The object class models are "learned" from example images that we call prototypes. In addition to the images, the pixelwise correspondences between a reference prototype and each of the other prototypes must also be provided. A model then consists of a linear combination of prototypical shapes and textures. A stochastic gradient descent algorithm is used to match a model to a novel image by minimizing the error between the model and the novel image. Example models are shown, as well as example matches to novel images. The robustness of the matching algorithm is also evaluated. The technique can be used for a number of applications, including the computation of correspondence between novel images of a certain known class, object recognition, image synthesis and image compression.
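A simplified sketch of the matching idea on synthetic data, texture-only and with prototypes already in pixelwise correspondence (the paper additionally models shape via the correspondences and uses stochastic rather than batch gradient descent):

```python
# Sketch: fit coefficients of a linear combination of prototype images
# to a novel image by plain gradient descent (texture-only; the paper
# also models shape via correspondences and uses stochastic descent).
import numpy as np

rng = np.random.default_rng(1)
n_pix = 32 * 32
P = rng.random((5, n_pix))                     # 5 flattened prototypes
novel = 0.5 * P[0] + 0.3 * P[1] + 0.2 * P[2]   # synthetic novel image

b = np.zeros(5)                                # model coefficients
lr = 0.3
for _ in range(2000):
    residual = b @ P - novel                   # model minus novel image
    b -= lr * 2 * (P @ residual) / n_pix       # gradient of mean sq. error

print(np.round(b, 2))  # close to [0.5, 0.3, 0.2, 0.0, 0.0]
```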
Abstract:
Plain text: ASCII, Unicode, UTF-8. Content formats: XML-based formats (RSS, MathML, SVG, Office) and PDF. Text-based data formats: CSV, JSON.
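A small illustration of these text-based formats in practice, using only the Python standard library:

```python
# Illustration: UTF-8 encoding and the CSV/JSON text-based data formats.
import csv
import io
import json

text = "naïve café"                      # Unicode string
encoded = text.encode("utf-8")           # UTF-8 byte representation
print(encoded)                           # b'na\xc3\xafve caf\xc3\xa9'
print(encoded.decode("utf-8") == text)   # True: round-trips losslessly

record = {"format": "JSON", "text_based": True}
print(json.loads(json.dumps(record)))    # JSON round-trip

rows = list(csv.reader(io.StringIO("format,text_based\nCSV,true\n")))
print(rows)                              # [['format', 'text_based'], ['CSV', 'true']]
```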
Abstract:
This document is a review of the content of the A-level Chemistry specifications from the main UK exam boards (Scottish Highers not included - sorry!). These A-level specifications commenced teaching in September 2008. Students entering university in 2010 will have studied the new A-levels, and this document is intended to help academics identify what students will have covered. The document also contains a summary of discussions between teachers and academics at our annual Post-16 teachers' day in June 2010 regarding the nature of the 2010 intake and their capabilities in chemistry. Please inform us of any errors or typos that you spot and we'll update the document. Last updated at 13:15 on 27 August 2010.
Abstract:
Getting content from a server to a client can be more complicated than we have discussed so far. This lecture discusses how caching and content delivery networks help make the Web work.
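A hedged illustration of one caching mechanism such a lecture typically covers, a conditional GET with ETag validation (the URL is a placeholder; assumes the third-party `requests` package):

```python
# Illustration: HTTP caching via ETag validation (conditional GET).
# The URL is a placeholder; requires the third-party `requests` package.
import requests

url = "https://example.com/resource"

r1 = requests.get(url)
etag = r1.headers.get("ETag")            # validator assigned by the server

if etag:
    # Revalidate the cached copy instead of re-downloading the body.
    r2 = requests.get(url, headers={"If-None-Match": etag})
    if r2.status_code == 304:            # 304 Not Modified
        print("cache still fresh; reuse the stored response body")
    else:
        print("resource changed; use the new body", len(r2.content))
```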
Abstract:
After producing reviews of A-level Chemistry content in 2007 and 2010, we have updated the document to reflect the changes introduced for first teaching in September 2015. We will be working with our local network of teachers to monitor the impact of the changes on teaching and the student experience, with a view to releasing an updated version in the summer of 2017. That version will aim to provide insights for university staff regarding the experiences of incoming students in the first cohort to have studied the new specifications. We are grateful to the Royal Society of Chemistry for supporting the final stages of compiling this report. If you spot any errors or omissions, please don't hesitate to contact us.