77 results for Software frameworks
Abstract:
Metabolic stable isotope labeling is increasingly employed for accurate protein (and metabolite) quantitation using mass spectrometry (MS). It provides sample-specific isotopologues that can be used to facilitate comparative analysis of two or more samples. Stable Isotope Labeling by Amino acids in Cell culture (SILAC) has been used for almost a decade in proteomic research, and analytical software solutions have been established that provide an easy and integrated workflow for elucidating sample abundance ratios for most MS data formats. While SILAC is a discrete labeling method using specific amino acids, global metabolic stable isotope labeling using isotopes such as ¹⁵N labels the entire element content of the sample, i.e. for ¹⁵N the entire peptide backbone in addition to all nitrogen-containing side chains. Although global metabolic labeling can deliver advantages with regard to isotope incorporation and costs, the requirements for data analysis are more demanding because, for instance, for polypeptides the mass difference introduced by the label depends on the amino acid composition. Consequently, there has been less progress on the automation of the data processing and mining steps for this type of protein quantitation. Here, we present a new integrated software solution for the quantitative analysis of protein expression in differential samples and show the benefits of high-resolution MS data in quantitative proteomic analyses.
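To make concrete why the label-induced mass shift depends on amino acid composition: under full ¹⁵N labeling each residue contributes as many extra mass units as it has nitrogen atoms. A minimal sketch in Python (the residue nitrogen counts and the ¹⁵N/¹⁴N mass difference are standard values; the peptide sequences are arbitrary illustrations, not examples from the paper):

```python
# Nitrogen atoms per amino-acid residue (one-letter codes).
NITROGENS_PER_RESIDUE = {
    'G': 1, 'A': 1, 'S': 1, 'P': 1, 'V': 1, 'T': 1, 'C': 1, 'L': 1, 'I': 1,
    'M': 1, 'F': 1, 'Y': 1, 'D': 1, 'E': 1, 'K': 2, 'Q': 2, 'N': 2,
    'H': 3, 'R': 4, 'W': 2,
}

# Mass difference between 15N and 14N in Daltons (~0.997 Da per nitrogen atom).
DELTA_15N_14N = 15.0001089 - 14.0030740

def n15_mass_shift(peptide: str) -> float:
    """Mass shift of a fully 15N-labelled peptide relative to its 14N form."""
    n_atoms = sum(NITROGENS_PER_RESIDUE[aa] for aa in peptide.upper())
    return n_atoms * DELTA_15N_14N

# Two peptides of similar length can have very different label-induced shifts,
# which is why 15N quantitation cannot assume a fixed mass offset (unlike SILAC,
# where the shift is determined by the labelled amino acid).
print(round(n15_mass_shift("ALGSVTEK"), 3))   # few nitrogens  -> smaller shift
print(round(n15_mass_shift("RHKNQRWK"), 3))   # nitrogen-rich  -> larger shift
```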
Abstract:
Much consideration is rightly given to the design of metadata models to describe data. At the other end of the data-delivery spectrum, much thought has also been given to the design of geospatial delivery interfaces such as the Open Geospatial Consortium standards: Web Coverage Service (WCS), Web Map Service (WMS) and Web Feature Service (WFS). Our recent experience with the Climate Science Modelling Language shows that an implementation gap exists where many challenges remain unsolved. Bridging this gap requires transposing information and data from one world view of geospatial climate data to another. The issues include: the loss of information in mapping to a common information model, the need to create ‘views’ onto file-based storage, and the need to map onto an appropriate delivery interface (as with the choice between WFS and WCS for feature types with coverage-valued properties). Here we summarise the approaches we have taken in facing up to these problems.
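As an illustration of the two delivery interfaces contrasted above, the sketch below shows a feature-style (WFS) request next to a coverage-style (WCS) request using the OWSLib client library. OWSLib is used purely for illustration, and the endpoint, feature type and coverage identifier are hypothetical rather than the services described in the paper:

```python
# Hedged sketch: the same climate data exposed through two OGC interfaces.
# Endpoint, feature type and coverage names are invented for illustration.
from owslib.wfs import WebFeatureService
from owslib.wcs import WebCoverageService

ENDPOINT = "https://example.org/ogc"

# WFS view: discrete features (e.g. point-series features) fetched by type name.
wfs = WebFeatureService(ENDPOINT + "/wfs", version="1.1.0")
features = wfs.getfeature(typename=["csml:PointSeriesFeature"])

# WCS view: a gridded coverage subset by bounding box, CRS and output format.
wcs = WebCoverageService(ENDPOINT + "/wcs", version="1.0.0")
coverage = wcs.getCoverage(
    identifier="air_temperature",
    bbox=(-10.0, 48.0, 4.0, 62.0),
    crs="EPSG:4326",
    format="NetCDF",
    width=256,
    height=256,
)
```

One practical difference is that WFS returns the feature (and any coverage-valued property) as a whole, whereas WCS lets the coverage be subset on demand; this is the kind of trade-off behind the interface choice the abstract raises.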
Abstract:
A new electronic software distribution (ESD) life cycle analysis (LCA) methodology and model structure were constructed to calculate energy consumption and greenhouse gas (GHG) emissions. To avoid reliance on high-level, top-down modeling efforts and to increase result accuracy, the focus was placed on device details and data routes. In order to compare ESD to a relevant physical distribution alternative, physical model boundaries and variables were described. The methodology was compiled from the analysis and operational data of a major online store which provides both ESD and physical distribution options. The ESD method included the calculation of the power consumption of data center server and networking devices. An in-depth method to calculate server efficiency and utilization was also included to account for virtualization and server efficiency features. Internet transfer power consumption was analyzed taking into account the number of data hops and networking devices used. The power consumed by online browsing and downloading was also factored into the model. The embedded CO2e of server and networking devices was apportioned to each ESD process. Three U.K.-based ESD scenarios were analyzed using the model, which revealed potential CO2e savings of 83% when ESD was used in place of physical distribution. The results also highlighted the importance of the server efficiency and utilization methods.
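The bottom-up, device-level style of calculation the abstract describes can be sketched as a simple per-download energy and emissions estimate; every constant below is a placeholder assumption chosen for illustration, not a figure from the study:

```python
# Illustrative per-download estimate in the spirit of a bottom-up ESD model:
# server share + network transfer + end-user device + embedded device carbon.
# All constants are placeholder assumptions, not data from the study.

FILE_SIZE_GB = 4.0                  # size of the downloaded software
SERVER_POWER_W = 400.0              # data-center server power draw
SERVER_UTILISATION = 0.5            # average utilisation (virtualisation raises this)
SERVER_THROUGHPUT_GB_PER_H = 200.0  # data served per hour at full utilisation
NETWORK_KWH_PER_GB_PER_HOP = 0.002  # transfer energy per GB per network hop
NETWORK_HOPS = 12                   # routers/switches on the data route
CLIENT_POWER_W = 60.0               # user's device while browsing/downloading
DOWNLOAD_HOURS = 0.5
GRID_KG_CO2E_PER_KWH = 0.23         # assumed grid emission factor
EMBEDDED_KG_CO2E_PER_GB = 0.001     # embedded device carbon apportioned per GB

# Server energy is apportioned over the data actually served: at lower
# utilisation the same power is spread over less data, so each GB carries more.
effective_throughput = SERVER_THROUGHPUT_GB_PER_H * SERVER_UTILISATION
server_kwh = (SERVER_POWER_W / 1000.0) * FILE_SIZE_GB / effective_throughput
network_kwh = FILE_SIZE_GB * NETWORK_KWH_PER_GB_PER_HOP * NETWORK_HOPS
client_kwh = (CLIENT_POWER_W / 1000.0) * DOWNLOAD_HOURS

operational_kg = (server_kwh + network_kwh + client_kwh) * GRID_KG_CO2E_PER_KWH
embedded_kg = FILE_SIZE_GB * EMBEDDED_KG_CO2E_PER_GB

print(f"Energy: {server_kwh + network_kwh + client_kwh:.3f} kWh")
print(f"CO2e:   {operational_kg + embedded_kg:.4f} kg (operational + embedded)")
```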
Abstract:
Despite the increasing use of groupware technologies in education, there is little evidence of their impact, especially within an enquiry-based learning (EBL) context. In this paper, we examine the use of a commercial standard Group Intelligence software called GroupSystems® ThinkTank. To date, ThinkTank has been adopted mainly in the USA and supports teams in generating ideas, categorising, prioritising, voting and multi-criteria decision-making, and automatically generates a report at the end of each session. The software was used by students carrying out an EBL project, set by employers, for a full academic year. The criteria for assessing the impact of ThinkTank on student learning were those of creativity, participation, productivity, engagement and understanding. Data was collected throughout the year using a combination of interviews and questionnaires, and written feedback from employers. The overall findings show an increase in levels of productivity and creativity, evidence of a deeper understanding of their work, but some variation in attitudes towards participation in the early stages of the project.
Abstract:
This paper reviews recent research and other literature concerning the planning and development of redundant defence estate. It concentrates on UK sources but includes reference to material from Europe and North America where it is relevant for comparative purposes. It introduces the topic by providing a brief review of the recent restructuring of the UK defence estate and then proceeds to examine the various planning policy issues generated by this process; the policy frameworks used to guide it; comparable approaches to surplus land disposal and the appraisal of impacts; ending the main body of the review with an analysis of the economic, social and environmental impacts of military base closure and redevelopment. It concludes that there is a significant body of work focusing on the reuse and redevelopment of redundant defence estate in the UK and abroad, but that much of this work is based on limited research or on personal experience. One particular weakness of the current literature is that it does not fully reflect the institutional difficulties posed by the disposal process and the day-to-day pressures which MOD personnel have to deal with. In doing so, it also under-emphasises the embedded cultures of individuals and professional groups who are required to operationalise the policies, procedures and practices for planning and redeveloping redundant defence estate.
Abstract:
I argue that the initial set of firm-specific assets (FSAs) acts as an envelope for the early stages of internationalization of multinational enterprises (MNEs) of whatever nationality, and that there is a threshold level of FSAs that an MNE must possess for such international expansion to be successful. I also argue that the initial FSAs of an MNE tend to be constrained by the location-specific (L) assets of the home country. However, beyond different initial conditions, there are few obvious reasons to insist that infant developing-country MNEs are of a different character from advanced-economy MNEs, and I predict that as they evolve, the observable differences between the two groups will diminish. Successful firms will increasingly explore internationalization, but there is also no reason to believe that this is likely to happen disproportionately from the developing countries.
Abstract:
In order to gain knowledge from large databases, scalable data mining technologies are needed. Data are captured on a large scale, and databases are thus growing at a fast pace; this drives the use of parallel computing technologies to cope with large amounts of data. In the area of classification rule induction, parallelisation efforts have focused on the divide-and-conquer approach, also known as Top Down Induction of Decision Trees (TDIDT). An alternative approach to classification rule induction is separate-and-conquer, which has only recently become a focus of parallelisation. This work introduces and empirically evaluates a framework for the parallel induction of classification rules generated by members of the Prism family of algorithms; all members of the Prism family follow the separate-and-conquer approach.
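For readers unfamiliar with the separate-and-conquer strategy the Prism family follows, the sketch below shows a minimal sequential Prism-style learner: a rule is specialised term by term to maximise precision for the target class, and the instances it covers are then removed before the next rule is induced. This is only a toy, sequential illustration on an invented dataset; the parallel framework that is the paper's contribution is not reproduced here:

```python
# Minimal sequential sketch of Prism-style separate-and-conquer rule induction.
# instances: list of (attribute_dict, class_label) pairs.

def learn_rules_for_class(instances, target_class):
    rules = []
    remaining = list(instances)
    while any(label == target_class for _, label in remaining):
        rule = {}                     # conjunction of attribute -> value terms
        covered = list(remaining)
        # Specialise until the rule covers only the target class (or terms run out).
        while any(label != target_class for _, label in covered):
            candidates = {
                (attr, val)
                for attrs, _ in covered
                for attr, val in attrs.items()
                if attr not in rule
            }
            if not candidates:
                break
            def precision(term):
                attr, val = term
                subset = [(a, c) for a, c in covered if a.get(attr) == val]
                return sum(1 for _, c in subset if c == target_class) / len(subset)
            attr, val = max(candidates, key=precision)
            rule[attr] = val
            covered = [(a, c) for a, c in covered if a.get(attr) == val]
        rules.append(rule)
        # "Separate": drop the instances the new rule covers, then "conquer" the rest.
        remaining = [
            (a, c) for a, c in remaining
            if not all(a.get(k) == v for k, v in rule.items())
        ]
    return rules

# Toy weather data (hypothetical), learning rules for the class "play".
data = [
    ({"outlook": "sunny",    "windy": "false"}, "play"),
    ({"outlook": "sunny",    "windy": "true"},  "no"),
    ({"outlook": "rainy",    "windy": "true"},  "no"),
    ({"outlook": "overcast", "windy": "false"}, "play"),
]
print(learn_rules_for_class(data, "play"))
```

In this sketch the expensive step is scoring candidate terms over the covered instances; it is that kind of work a parallel framework would seek to distribute.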
Abstract:
This paper reviews the treatment of intellectual property rights in the North American Free Trade Agreement (NAFTA) and considers the welfare-theoretic bases for innovation transfer between member and nonmember states. Specifically, we consider the effects of new technology development from within the union and ask whether it is efficient (in a welfare sense) to transfer that new technology to nonmember states. When the new technology contains stochastic components, the important issue of information exchange arises, and we consider this question in a simple oligopoly model with Bayesian updating. In this context, it is natural to ask at what price such information should optimally be transferred. Some simple, natural-conjugate examples are used to motivate the key parameters on which the answer depends.
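To give a flavour of the natural-conjugate machinery the abstract mentions, the sketch below prices a noisy signal about uncertain demand for a single firm with linear demand, a normal prior and normal signal noise. This is a deliberately simplified stand-in (a one-firm decision problem rather than the paper's oligopoly model), and all parameter values are arbitrary assumptions; the signal's value is the expected profit gain from updating beliefs before choosing output:

```python
import random

# Simplified stand-in, not the paper's model: a firm faces inverse demand
# p = a - q with unit cost c, and the intercept a is uncertain.
# Normal prior on a + normal signal noise -> natural-conjugate (normal) update.
# The most the firm would pay for the signal = expected profit gain it yields.

random.seed(0)

MU0, TAU0 = 10.0, 2.0   # prior mean and std dev of the demand intercept a
SIGMA = 1.5             # std dev of the signal noise
C = 2.0                 # marginal cost
N_DRAWS = 200_000

def posterior_mean(signal):
    # Standard normal-normal conjugate update of the mean.
    w = TAU0**2 / (TAU0**2 + SIGMA**2)
    return (1 - w) * MU0 + w * signal

def profit(belief_mean, true_a):
    # Realised profit when the quantity is chosen to be optimal under the belief.
    q = max((belief_mean - C) / 2.0, 0.0)
    return (true_a - q - C) * q

gain = 0.0
for _ in range(N_DRAWS):
    a = random.gauss(MU0, TAU0)   # true demand intercept
    s = random.gauss(a, SIGMA)    # noisy signal about it
    gain += profit(posterior_mean(s), a) - profit(MU0, a)

print(f"Monte Carlo value of the signal: {gain / N_DRAWS:.3f}")
# Analytic benchmark for this setup: Var(posterior mean) / 4.
print(f"Analytic value:                  {TAU0**4 / (4 * (TAU0**2 + SIGMA**2)):.3f}")
```

Even in this toy setting, the value of the information, and hence the price at which it could change hands, is governed by the prior variance and the signal noise, the kind of key parameters the abstract's conjugate examples are meant to expose.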