813 results for Software Development – metrics


Relevance:

30.00%

Publisher:

Abstract:

Individuals with intellectual disability have a greater risk of developing dementia. The diagnosis of dementia relies on accurate testing of cognitive function; however, existing tests have limited utility in people whose intellectual disability is moderate or greater. A new test was developed and underwent preliminary testing to determine its usability across a wider ability spectrum. The Cognitive Baseline & Screener for People with Intellectual Disability (CBS-ID) was administered to a sample of 17 dyads (n=34), each comprising a person with intellectual disability (who completed the CBS-ID) and a caregiver (who provided an independent rating of function). The CBS-ID performed well on several usability metrics across all levels of intellectual disability and was highly correlated with the existing measures of cognitive function to which it was compared. Further research with a larger sample is needed to assess the test's ability to detect change in cognition over time and to determine whether it aids the process of diagnosing dementia.

Relevance:

30.00%

Publisher:

Abstract:

Partial evaluation of infrastructure investments has resulted in expensive mistakes, unsatisfactory outcomes and increased uncertainties for too many stakeholders, communities and economies in both developing and developed nations. "Complex Stakeholder Perception Mapping" (CSPM) is a novel approach that can address existing limitations by inclusively framing, capturing and mapping the spectrum of insights and perceptions using extended Geographic Information Systems (GIS). Maps generated in CSPM present flexibly combined, complex perceptions of stakeholders on multiple aspects of development. CSPM extends the application of GIS software to non-spatial mapping, extends Multi-Criteria Analysis with a multidimensional evaluation platform, and augments decision science capabilities in addressing complexity. Applying CSPM can improve local and regional economic gains from infrastructure projects and aid any multi-objective, multi-stakeholder decision situation.

Relevance:

30.00%

Publisher:

Abstract:

Understanding how the brain matures in healthy individuals is critical for evaluating deviations from normal development in psychiatric and neurodevelopmental disorders. The brain's anatomical networks are profoundly re-modeled between childhood and adulthood, and diffusion tractography offers unprecedented power to reconstruct these networks and neural pathways in vivo. Here we tracked changes in structural connectivity and network efficiency in 439 right-handed individuals aged 12 to 30 (211 female/126 male adults, mean age=23.6, SD=2.19; 31 female/24 male 12 year olds, mean age=12.3, SD=0.18; and 25 female/22 male 16 year olds, mean age=16.2, SD=0.37). All participants were scanned with high angular resolution diffusion imaging (HARDI) at 4 T. After we performed whole brain tractography, 70 cortical gyral-based regions of interest were extracted from each participant's co-registered anatomical scans. The proportion of fiber connections between all pairs of cortical regions, or nodes, was used to create symmetric fiber density matrices reflecting the structural brain network. From those 70 × 70 matrices we computed graph theory metrics characterizing structural connectivity. Several key global and nodal metrics changed across development, showing increased network integration, with some connections pruned and others strengthened. The increases and decreases in fiber density, however, were not distributed proportionally across the brain. The frontal cortex had a disproportionate number of decreases in fiber density while the temporal cortex had a disproportionate number of increases in fiber density. This large-scale analysis of the developing structural connectome offers a foundation to develop statistical criteria for aberrant brain connectivity as the human brain matures.
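
The graph construction and metric computation described above can be illustrated with a brief sketch. This is not the authors' processing pipeline; it assumes a hypothetical symmetric fiber density matrix and uses networkx to compute one commonly reported global metric (global efficiency) of the kind such studies track across development.

```python
# Illustrative sketch only -- not the study's processing pipeline.
# Assumes a hypothetical symmetric 70 x 70 fiber density matrix.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_nodes = 70

# Stand-in for a real fiber density matrix (proportion of streamlines
# connecting each pair of cortical regions); symmetric, zero diagonal.
density = rng.random((n_nodes, n_nodes))
density = (density + density.T) / 2
np.fill_diagonal(density, 0.0)

# Build a weighted graph; treat 1/density as the "length" of each edge
# so that stronger connections correspond to shorter paths.
graph = nx.from_numpy_array(density)
for _, _, d in graph.edges(data=True):
    d["length"] = 1.0 / d["weight"] if d["weight"] > 0 else np.inf

# Global efficiency: mean of inverse shortest-path lengths between node pairs.
path_lengths = dict(nx.all_pairs_dijkstra_path_length(graph, weight="length"))
inv_lengths = [
    1.0 / path_lengths[u][v]
    for u in graph for v in graph
    if u != v and path_lengths[u][v] > 0
]
global_efficiency = np.mean(inv_lengths)
print(f"Global efficiency: {global_efficiency:.3f}")
```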

Relevance:

30.00%

Publisher:

Abstract:

If the land sector is to make significant contributions to mitigating anthropogenic greenhouse gas (GHG) emissions in coming decades, it must do so while concurrently expanding production of food and fiber. In our view, mathematical modeling will be required to provide scientific guidance to meet this challenge. In order to be useful in GHG mitigation policy measures, models must simultaneously meet scientific, software engineering, and human capacity requirements. They can be used to understand GHG fluxes, to evaluate proposed GHG mitigation actions, and to predict and monitor the effects of specific actions; the latter applications require a change in mindset that has parallels with the shift from research modeling to decision support. We compare and contrast 6 agro-ecosystem models (FullCAM, DayCent, DNDC, APSIM, WNMM, and AgMod), chosen because they are used in Australian agriculture and forestry. Underlying structural similarities in the representations of carbon flows through plants and soils in these models are complemented by a diverse range of emphases and approaches to the subprocesses within the agro-ecosystem. None of these agro-ecosystem models handles all land sector GHG fluxes, and considerable model-based uncertainty exists for soil C fluxes and enteric methane emissions. The models also show diverse approaches to the initialisation of model simulations, software implementation, distribution, licensing, and software quality assurance; each of these will differentially affect their usefulness for policy-driven GHG mitigation prediction and monitoring. Specific requirements imposed on the use of models by Australian mitigation policy settings are discussed, and areas for further scientific development of agro-ecosystem models for use in GHG mitigation policy are proposed.

Relevance:

30.00%

Publisher:

Abstract:

Purpose – There has been a tendency in sustainability science to be passive. The purpose of this paper is to introduce an alternative positive framework for a more active and direct approach to sustainable design and assessment that decouples environmental impacts and economic growth. Design/methodology/approach – This paper deconstructs some systemic gaps that are critical to sustainability in built environment management processes and tools, and reframes negative "sustainable" decision making and assessment frameworks into their positive counterparts. In particular, it addresses the omission of ecology, design and ethics in development assessment. Findings – Development can be designed to provide ecological gains and surplus "eco-services," but assessment tools and processes favor business-as-usual. Despite the tenacity of the dominant paradigm (DP) in sustainable development institutionalized by the Brundtland Report over 25 years ago, these omissions are easily corrected. Research limitations/implications – The limitation is that the author was unable to find exceptions to the omissions cited here in the extensive literature on urban planning and building assessment tools. However, exceptions prove the rule. The implication is that it is not too late for eco-positive retrofitting of cities to increase natural and social capital. The solutions are just as applicable in places like China and India as in the USA, as they pay for themselves. Originality/value – Positive development (PD) is a fundamental paradigm shift that reverses the negative models, methods and metrics of the DP of sustainable development. This paper provides an example of how existing "negative" concepts and practices can be converted into positive ones through a PD prism. Through a new form of bio-physical design, development can be a sustainability solution.

Relevance:

30.00%

Publisher:

Abstract:

Objective: The aim of this study was to develop a model capable of predicting variability in the mental workload experienced by frontline operators under routine and nonroutine conditions. Background: Excess workload is a risk that needs to be managed in safety-critical industries. Predictive models are needed to manage this risk effectively yet are difficult to develop. Much of the difficulty stems from the fact that workload prediction is a multilevel problem. Method: A multilevel workload model was developed in Study 1 with data collected from an en route air traffic management center. Dynamic density metrics were used to predict variability in workload within and between work units while controlling for variability among raters. The model was cross-validated in Studies 2 and 3 with the use of a high-fidelity simulator. Results: Reported workload generally remained within the bounds of the 90% prediction interval in Studies 2 and 3. Workload crossed the upper bound of the prediction interval only under nonroutine conditions. Qualitative analyses suggest that nonroutine events caused workload to cross the upper bound of the prediction interval because the controllers could not manage their workload strategically. Conclusion: The model performed well under both routine and nonroutine conditions and over different patterns of workload variation. Application: Workload prediction models can be used to support both strategic and tactical workload management. Strategic uses include the analysis of historical and projected workflows and the assessment of staffing needs. Tactical uses include the dynamic reallocation of resources to meet changes in demand.
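
A minimal sketch of the multilevel idea, under stated assumptions: the column names (sector, density, workload), the single predictor, and the random-intercept structure below are illustrative stand-ins, not the dynamic density metrics or the full model specification used in the study.

```python
# Hedged sketch of a random-intercept (multilevel) workload model.
# Column names and the single predictor are illustrative assumptions,
# not the dynamic density metrics used in the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
sectors = np.repeat([f"sector_{i}" for i in range(8)], 50)
density = rng.uniform(0, 30, size=sectors.size)        # traffic density metric
codes = pd.factorize(sectors)[0]                       # integer code per work unit
sector_effect = rng.normal(0, 0.5, size=8)[codes]      # between-unit variation
workload = 2.0 + 0.15 * density + sector_effect + rng.normal(0, 0.8, sectors.size)
data = pd.DataFrame({"sector": sectors, "density": density, "workload": workload})

# Random intercept for each work unit (sector) captures between-unit variability.
model = smf.mixedlm("workload ~ density", data, groups=data["sector"]).fit()
print(model.summary())

# Approximate 90% prediction interval for a new observation at a given density,
# ignoring parameter uncertainty (a simplification for illustration).
new_density = 20.0
point = model.fe_params["Intercept"] + model.fe_params["density"] * new_density
resid_sd = np.sqrt(model.scale + float(model.cov_re.iloc[0, 0]))
lower, upper = point - 1.645 * resid_sd, point + 1.645 * resid_sd
print(f"Predicted workload {point:.2f}, 90% PI ({lower:.2f}, {upper:.2f})")
```

The interval here is a rough approximation; the study's prediction intervals account for rater variability and are constructed more carefully.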

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an overview of the issues in precisely defining, specifying and evaluating the dependability of software, particularly in the context of computer-controlled process systems. Dependability is intended to be a generic term embodying various quality factors and is useful for both software and hardware. While developments in quality assurance and reliability theories have proceeded mostly in independent directions for hardware and software systems, we present here the case for developing a unified framework of dependability, a facet of the operational effectiveness of modern technological systems, and develop a hierarchical systems model helpful in clarifying this view. In the second half of the paper, we survey the models and methods available for measuring and improving software reliability. The nature of software "bugs", the failure history of the software system in the various phases of its lifecycle, reliability growth in the development phase, estimation of the number of errors remaining in the operational phase, and the complexity of the debugging process are all considered to varying degrees of detail. We also discuss the notion of software fault tolerance, methods of achieving it, and the status of other measures of software dependability such as maintainability, availability and safety.
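
As a concrete illustration of the class of reliability growth models surveyed here (not a method taken from the paper itself), the sketch below fits one widely used example, the Goel-Okumoto non-homogeneous Poisson process model with mean value function m(t) = a * (1 - exp(-b*t)), to hypothetical failure times and estimates the expected number of residual errors.

```python
# Illustrative fit of a Goel-Okumoto NHPP reliability growth model,
# m(t) = a * (1 - exp(-b t)), to hypothetical failure times.
# This is one example of the model class surveyed in the paper,
# not a reimplementation of any specific method from it.
import numpy as np
from scipy.optimize import minimize

failure_times = np.array([ 9., 21., 32., 36., 43., 45., 50., 58., 63., 70.,
                          71., 77., 78., 87., 91., 92., 95., 98., 104., 105.])
T = failure_times[-1]          # end of observation period
n = failure_times.size

def neg_log_likelihood(params):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    # NHPP log-likelihood: sum of log intensities minus expected count m(T).
    log_intensity = np.log(a * b) - b * failure_times
    return -(log_intensity.sum() - a * (1 - np.exp(-b * T)))

result = minimize(neg_log_likelihood, x0=[n * 1.5, 0.01], method="Nelder-Mead")
a_hat, b_hat = result.x
remaining = a_hat * np.exp(-b_hat * T)      # expected faults not yet exposed
print(f"Estimated total faults: {a_hat:.1f}, expected residual faults: {remaining:.1f}")
```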

Relevance:

30.00%

Publisher:

Abstract:

The development of innovative methods of stock assessment is a priority for State and Commonwealth fisheries agencies. It is driven by the need to facilitate sustainable exploitation of naturally occurring fisheries resources for the current and future economic, social and environmental well-being of Australia. This project was initiated in this context and took advantage of considerable recent achievements in genomics that are shaping our comprehension of the DNA of humans and animals. The basic idea behind this project was that genetic estimates of effective population size, which can be made from empirical measurements of genetic drift, are equivalent to estimates of the number of successful spawners, an important parameter in the process of fisheries stock assessment. The broad objectives of this study were to:
1. Critically evaluate a variety of mathematical methods of calculating effective spawner numbers (Ne) by (a) conducting comprehensive computer simulations and (b) analysing empirical data collected from the Moreton Bay population of tiger prawns (P. esculentus).
2. Lay the groundwork for the application of the technology in the northern prawn fishery (NPF).
3. Produce software for the calculation of Ne and make it widely available.
The project pulled together a range of mathematical models for estimating current effective population size from diverse sources. Some had recently been implemented with the latest statistical methods (e.g. a Bayesian framework; Berthier, Beaumont et al. 2002), while others had lower profiles (e.g. Pudovkin, Zaykin et al. 1996; Rousset and Raymond 1995). Computer code, and later software with a user-friendly interface (NeEstimator), was produced to implement the methods. This was used as a basis for simulation experiments to evaluate the performance of the methods with an individual-based model of a prawn population. Following the guidelines suggested by the computer simulations, the tiger prawn population in Moreton Bay (south-east Queensland) was sampled for genetic analysis with eight microsatellite loci in three successive spring spawning seasons in 2001, 2002 and 2003. As predicted by the simulations, the estimates had non-infinite upper confidence limits, which is a major achievement for the application of the method to a naturally occurring, short-generation, highly fecund invertebrate species. The genetic estimate of the number of successful spawners was around 1000 individuals in two consecutive years. This contrasts with about 500,000 prawns participating in spawning. It is not possible to distinguish successful from non-successful spawners, so we suggest a high level of protection for the entire spawning population. We interpret the difference in numbers between successful and non-successful spawners as a large variation in the number of offspring per family that survive: a large number of families have no surviving offspring, while a few have a large number.
We explored various ways in which Ne can be useful in fisheries management. It can be a surrogate for spawning population size, assuming the ratio between Ne and spawning population size has been previously calculated for that species. Alternatively, it can be a surrogate for recruitment, again assuming that the ratio between Ne and recruitment has been previously determined. The number of species that can be analysed in this way, however, is likely to be small because of species-specific life history requirements that need to be satisfied for accuracy.
The most universal approach would be to integrate Ne with spawning stock-recruitment models, so that these models are more accurate when applied to fisheries populations. A pathway to achieve this was established in this project, which we predict will significantly improve fisheries sustainability in the future. Regardless of the success of integrating Ne into spawning stock-recruitment models, Ne could be used as a fisheries monitoring tool. Declines in spawning stock size or increases in natural or harvest mortality would be reflected by a decline in Ne. This would be valuable for data-poor fisheries and would provide fishery-independent information; however, we suggest a species-by-species approach, as some species may be too numerous or experiencing too much migration for the method to work. During the project, two important theoretical studies of the simultaneous estimation of effective population size and migration were published (Vitalis and Couvet 2001b; Wang and Whitlock 2003). These methods, combined with the collection of preliminary genetic data from the tiger prawn population in the southern Gulf of Carpentaria and a computer simulation study that evaluated the effect of differing reproductive strategies on genetic estimates, suggest that this technology could make an important contribution to the stock assessment process in the northern prawn fishery (NPF). Advances in genomics are rapid, and a cheaper, more reliable substitute for microsatellite loci in this technology is already available: digital data from single nucleotide polymorphisms (SNPs) are likely to supersede 'analogue' microsatellite data, making it cheaper and easier to apply the method to species with large population sizes.
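
To make the link between genetic drift and Ne concrete, the sketch below illustrates one classical temporal-method estimator (a standardized allele-frequency variance with a correction for sampling noise, in the spirit of Nei & Tajima and Waples). The allele frequencies, sample sizes and number of loci are hypothetical, and this is not the NeEstimator implementation, which supports several estimators and handles multi-allelic microsatellite data far more carefully.

```python
# Simplified temporal-method sketch for effective population size (Ne).
# Hypothetical allele frequencies at a few loci sampled t generations apart;
# this is not the NeEstimator software, just one classical estimator
# (standardized variance Fc with a sample-size correction, Waples-style).
import numpy as np

t = 2                      # generations between samples
S0, St = 100, 100          # number of individuals sampled at each time point

# Allele frequencies (one biallelic locus per row: frequency of allele A).
p0 = np.array([0.62, 0.35, 0.48, 0.71, 0.55])   # first sample
pt = np.array([0.50, 0.45, 0.38, 0.62, 0.66])   # second sample

# Standardized variance of allele-frequency change, averaged over loci.
fc = np.mean((p0 - pt) ** 2 / (((p0 + pt) / 2) - p0 * pt))

# Correct for sampling noise contributed by finite sample sizes.
f_drift = fc - 1.0 / (2 * S0) - 1.0 / (2 * St)
ne_hat = t / (2 * f_drift) if f_drift > 0 else np.inf
print(f"Fc = {fc:.4f}, estimated Ne = {ne_hat:.0f}")
```

If the corrected drift signal is zero or negative, the point estimate is infinite, which is why non-infinite upper confidence limits are emphasised above.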

Relevance:

30.00%

Publisher:

Abstract:

Forage budgeting, land condition monitoring and maintaining ground cover residuals are critical management practices for the long-term sustainability of the northern grazing industry. The aim of this project is to conduct a preliminary investigation into industry need, feasibility and willingness to adopt a simple-to-use hand-held hardware device and compatible, integrated software applications that producers can use in the paddock to assist with land condition monitoring and forage budgeting for better Grazing Land Management and to assist with compliance.

Relevance:

30.00%

Publisher:

Abstract:

The open development model of software production has been characterized as the future model of knowledge production and distributed work. The open development model refers to publicly available source code ensured by an open source license, and to the extensive and varied distributed participation of volunteers enabled by the Internet. Contemporary spokesmen of open source communities and academics view open source development as a new form of volunteer work activity characterized by the hacker ethic and bazaar governance. The development of the Linux operating system is perhaps the best known example of such an open source project. It started as an effort by a user-developer and grew quickly into a large project with hundreds of user-developers as contributors. However, in "hybrids", in which firms participate in open source projects oriented towards end-users, it seems that most users do not write code. The OpenOffice.org project, initiated by Sun Microsystems, represents such a project in this study. In addition, Finnish public sector ICT decision-making concerning open source use is studied. The purpose is to explore the assumptions, theories and myths related to the open development model by analysing the discursive construction of the OpenOffice.org community: its developers, users and management. The qualitative study aims at shedding light on the dynamics and challenges of community construction and maintenance, and the related power relations in hybrid open source, by asking two main research questions: How are the structure and membership constellation of the community, specifically the relation between developers and users, linguistically constructed in hybrid open development? What characterizes Internet-mediated virtual communities, how can they be defined, and how do they differ from hierarchical forms of knowledge production on the one hand and from traditional volunteer communities on the other? The study utilizes sociological, psychological and anthropological concepts of community for understanding the connection between the real and the imaginary in so-called virtual open source communities. Intermediary methodological and analytical concepts are borrowed from discourse and rhetorical theories. A discursive-rhetorical approach is offered as a methodological toolkit for studying texts and writing in Internet communities. The empirical chapters approach the problem of community and its membership from four complementary points of view. The data comprise mailing list discussions, personal interviews, web page writings, email exchanges, field notes and other historical documents. The four viewpoints are: 1) the community as conceived by volunteers; 2) the individual contributor's attachment to the project; 3) public sector organizations as users of open source; 4) the community as articulated by the community manager. I arrive at four conclusions concerning my empirical studies (1-4) and two general conclusions (5-6). 1) Sun Microsystems and OpenOffice.org Groupware volunteers failed to develop the necessary and sufficient open code and open dialogue to ensure collaboration, thus splitting the Groupware community into volunteers ("we") and the firm ("them"). 2) Instead of separating intrinsic and extrinsic motivations, I find that volunteers' unique patterns of motivation are tied to changing objects and personal histories prior to and during participation in the OpenOffice.org Lingucomponent project.
Rather than seeing volunteers as a unified community, they can be better understood as independent entrepreneurs in search of a "collaborative community". The boundaries between work and hobby are blurred and shifting, thus questioning the usefulness of the concept of "volunteer". 3) The public sector ICT discourse portrays a dilemma and tension between the freedom to choose, use and develop one's desktop in the spirit of open source on the one hand, and the striving for better desktop control and maintenance by IT staff and user advocates on the other. The link between the global OpenOffice.org community and local end-user practices is weak and mediated by the problematic IT staff-(end)user relationship. 4) "Authoring community" can be seen as a new type of managerial practice in hybrid open source communities. The ambiguous concept of community is a powerful strategic tool for orienting towards multiple real and imaginary audiences, as evidenced in the global membership rhetoric. 5) The changing and contradictory discourses of this study show a change in the conceptual system and developer-user relationship of the open development model. This change is characterized as a movement from the hacker ethic and bazaar governance to a more professionally and strategically regulated community. 6) Community is simultaneously real and imagined, and can be characterized as a "runaway community". Discursive action can be seen as a specific type of online open source engagement. Hierarchies and structures are created through discursive acts. Key words: Open Source Software, open development model, community, motivation, discourse, rhetoric, developer, user, end-user

Relevance:

30.00%

Publisher:

Abstract:

A fundamental task in bioinformatics involves the transfer of knowledge from one protein molecule onto another by way of recognizing similarities. Such similarities are identified at different levels: that of the sequence, the whole fold, or important substructures. Comparison of binding sites is important for understanding functional similarities among proteins and also for understanding drug cross-reactivities. Current methods in the literature have their own merits and demerits, warranting exploration of newer concepts and algorithms, especially for large-scale comparisons and for obtaining accurate residue-wise mappings. Here, we report the development of a new algorithm, PocketAlign, for obtaining structural superpositions of binding sites. The software is available as a web service at http://proline.physics.iisc.ernet.in/pocketalign/. The algorithm encodes shape descriptors in the form of geometric perspectives, supplemented by chemical group classification. The shape descriptor considers several perspectives, with each residue as the focus, and captures the relative distribution of residues around it in a given site. Residue-wise pairings are computed by comparing the set of perspectives of the first site with that of the second, followed by a greedy approach that incrementally combines residue pairings into a mapping. The mappings in different frames are then evaluated by different metrics encoding the extent of alignment of the individual geometric perspectives. Different initial seed alignments are computed, each subsequently extended by detecting consequential atomic alignments in a three-dimensional grid, and the best 500 are stored in a database. Alignments are then ranked, and the top-scoring alignments reported, which are then streamed into PyMOL for visualization and analyses. The method is validated for accuracy and sensitivity and benchmarked against existing methods. An advantage of PocketAlign, as compared to some of the existing tools available for binding site comparison in the literature, is that it explores different schemes for identifying an alignment and thus has a better potential to capture similarities in ligand recognition abilities. PocketAlign, by finding a detailed alignment of a pair of sites, provides insights into why two sites are similar and which set of residues and atoms contribute to the similarity.
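
The perspective-based pairing idea can be sketched in a few lines. The toy code below is not PocketAlign itself and omits the chemical group classification, frame evaluation and grid-based extension steps; it only illustrates the general notion of describing each residue by its distance "perspective" within a site and greedily combining the most similar residue pairings into a one-to-one mapping.

```python
# Toy sketch of perspective-based residue pairing between two binding sites.
# Not the PocketAlign algorithm or code; it only illustrates the general idea
# of comparing per-residue distance "perspectives" and greedily building a mapping.
import numpy as np

def perspectives(coords):
    """For each residue, the sorted distances to all other residues in the site."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    return np.sort(dist, axis=1)[:, 1:]          # drop the zero self-distance

def greedy_mapping(site_a, site_b):
    """Pair residues whose perspectives are most similar, greedily and one-to-one."""
    pa, pb = perspectives(site_a), perspectives(site_b)
    k = min(pa.shape[1], pb.shape[1])
    cost = np.linalg.norm(pa[:, None, :k] - pb[None, :, :k], axis=-1)
    mapping, used_a, used_b = [], set(), set()
    for i, j in sorted(np.ndindex(cost.shape), key=lambda ij: cost[ij]):
        if i not in used_a and j not in used_b:
            mapping.append((i, j, cost[i, j]))
            used_a.add(i)
            used_b.add(j)
    return mapping

# Hypothetical C-alpha coordinates for two small sites (angstroms).
rng = np.random.default_rng(3)
site_a = rng.uniform(0, 15, size=(8, 3))
site_b = site_a[:6] + rng.normal(0, 0.3, size=(6, 3))   # a noisy, partial copy
for i, j, c in greedy_mapping(site_a, site_b):
    print(f"residue {i} of site A -> residue {j} of site B (cost {c:.2f})")
```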

Relevance:

30.00%

Publisher:

Abstract:

Precision, sophistication and economic factors in many areas of scientific research demand compute power of very high magnitude; advanced research in the area of high-performance computing is therefore becoming inevitable. The basic principle of sharing and collaborative work by geographically separated computers is known by several names, such as metacomputing, scalable computing, cluster computing and Internet computing, and has today metamorphosed into a new term known as grid computing. This paper gives an overview of grid computing and compares various grid architectures. We show the role that patterns can play in architecting complex systems, and provide a very pragmatic reference to a set of well-engineered patterns that the practicing developer can apply to crafting his or her own specific applications. We are not aware of a pattern-oriented approach having been applied to develop and deploy a grid. There are many grid frameworks that have been built or are in the process of becoming functional. All these grids differ in some functionality or other, though the basic principle on which they are built is the same. Despite this, there are no standard requirements listed for building a grid. The grid being a very complex system, it is mandatory to have a standard Software Architecture Specification (SAS). We attempt to develop such a specification for use by any grid user or developer. Specifically, we analyze the grid using an object-oriented approach and present the architecture using UML. This paper proposes the usage of patterns at all levels (analysis, design and architectural) of grid development.

Relevance:

30.00%

Publisher:

Abstract:

As academic libraries are increasingly supported by a matrix of database functions, the use of data mining and visualization techniques offers significant potential for future collection development and service initiatives based on quantifiable data. While data collection techniques are still not standardized and results may be skewed because of granularity problems, faulty algorithms, and a host of other factors, useful baseline data is extractable and broad trends can be identified. The purpose of the current study is to provide an initial assessment of data associated with the science monograph collection at the Marston Science Library (MSL), University of Florida. These sciences fall within the major Library of Congress Classification schedules of Q, S, and T, excluding R, TN, TR, and TT. The overall strategy of this project is to look at the potential science audiences within the university community and analyze data related to purchasing and circulation patterns, e-book usage, and interlibrary loan statistics. While a longitudinal study from 2004 to the present would be ideal, this paper presents the results from the academic year July 1, 2008 to June 30, 2009, which was chosen as the pilot period because all data reservoirs identified above were available.

Relevance:

30.00%

Publisher:

Abstract:

Designers who want to manufacture a hardenable steel component need to select both the steel and its heat treatment. This project aims to develop a selection methodology for steels and process routes as an aid to designers. Three studies were conducted:
- production of software to calculate the "equivalent diameter" and "equivalent Jominy distance" for simple shapes of a steel component;
- prediction of semi-empirical Jominy curves (as-cooled) using CCT diagrams and process modelling methods, which were validated by experiment on plain carbon steels;
- investigation of tempering of Jominy bars to explore the potential for semi-empirical models for the hardness after tempering.
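
As a hedged illustration of how an equivalent Jominy distance, once computed for a position in a component, might be used to predict as-quenched hardness: the sketch below simply interpolates a Jominy end-quench curve. The curve values and the example equivalent distance are hypothetical placeholders, not results from this project.

```python
# Hedged sketch: look up hardness at an "equivalent Jominy distance" by
# interpolating a Jominy end-quench curve. The curve values and the example
# equivalent distance are hypothetical placeholders, not project results.
import numpy as np

# Distance from the quenched end (mm) vs. measured hardness (HRC) -- illustrative.
jominy_distance_mm = np.array([1.5, 3, 5, 7, 9, 11, 15, 20, 25, 30, 40, 50])
hardness_hrc       = np.array([55, 54, 52, 48, 43, 39, 34, 31, 29, 28, 26, 25])

def hardness_at(equivalent_distance_mm):
    """Interpolate hardness at an equivalent Jominy distance (clamped to the data range)."""
    return float(np.interp(equivalent_distance_mm, jominy_distance_mm, hardness_hrc))

# Example: a position in a bar whose cooling rate corresponds to 8 mm on the Jominy bar.
print(f"Predicted as-quenched hardness: {hardness_at(8.0):.1f} HRC")
```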