885 results for "Context data"


Relevance: 30.00%

Abstract:

Recent advances in hardware development, coupled with the rapid adoption and broad applicability of cloud computing, have introduced widespread heterogeneity in data centers, significantly complicating the management of cloud applications and data center resources. This paper presents the CACTOS approach to cloud infrastructure automation and optimization, which addresses heterogeneity by combining in-depth analysis of application behavior with insights from commercial cloud providers. The aim of the approach is threefold: to model applications and data center resources, to simulate applications and resources for planning and operation, and to optimize application deployment and resource use in an autonomic manner. The approach is based on case studies from the areas of business analytics, enterprise applications, and scientific computing.
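
By way of illustration, the sketch below shows the kind of autonomic monitor-analyze-plan loop such an approach implies: model the nodes, detect load imbalance, and plan migrations. All class and function names here are hypothetical and are not the CACTOS toolkit API.

```python
# Illustrative autonomic optimization sketch (model -> analyze -> plan);
# names are invented, not the actual CACTOS tools.
from dataclasses import dataclass

@dataclass
class NodeModel:
    name: str
    cpu_cores: int
    cpu_load: float  # fraction of capacity currently used, 0.0-1.0

def plan_migrations(nodes, high=0.8, low=0.3):
    """Propose moving work from overloaded to underloaded nodes."""
    overloaded = [n for n in nodes if n.cpu_load > high]
    underloaded = [n for n in nodes if n.cpu_load < low]
    return [(src.name, dst.name) for src, dst in zip(overloaded, underloaded)]

nodes = [NodeModel("n1", 16, 0.92), NodeModel("n2", 16, 0.15)]
print(plan_migrations(nodes))  # [('n1', 'n2')]
```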

Relevance: 30.00%

Abstract:

This study is the first to compare random regret minimisation (RRM) and random utility maximisation (RUM) in a freight transport application, considering a scenario that involves a negative shock to the reference alternative. Based on data from two stated choice experiments conducted among Swiss logistics managers, the study contributes to the related literature by exploring, for the first time, the use of mixed logit models in the most recent version of the RRM approach. We further investigate the two paradigms by computing elasticities and forecasting choice probabilities. We find that regret is important in describing the managers' choices, and that regret increases in the shock scenario, supporting the idea that a shift in reference point can cause a shift towards regret minimisation. Differences in elasticities and forecast probabilities are identified and discussed.
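
As a concrete illustration of the two paradigms, the snippet below contrasts linear RUM utilities with the classical RRM regret function of Chorus (2010), in which the regret of an alternative sums pairwise attribute comparisons against every competing alternative. The attribute values and coefficients are invented; this is a sketch of the general model forms, not the paper's estimated mixed logit specification.

```python
# Minimal numerical contrast of RUM utility and classical RRM regret;
# data and betas are made up for illustration.
import numpy as np

X = np.array([[10.0, 3.0],   # alternative 1: [cost, time]
              [12.0, 2.0],
              [ 9.0, 4.0]])
beta = np.array([-0.2, -0.5])  # negative: higher cost/time is worse

# RUM: linear utility with logit choice probabilities
U = X @ beta
p_rum = np.exp(U) / np.exp(U).sum()

# RRM: regret of i sums ln(1 + exp(beta_m * (x_jm - x_im))) over all j != i
R = np.array([
    sum(np.log1p(np.exp(beta * (X[j] - X[i]))).sum()
        for j in range(len(X)) if j != i)
    for i in range(len(X))
])
p_rrm = np.exp(-R) / np.exp(-R).sum()
print(p_rum.round(3), p_rrm.round(3))
```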

Relevance: 30.00%

Abstract:

Retrospective clinical datasets are often characterized by a relatively small sample size and a large amount of missing data. A common way of handling the missingness is to discard patients with missing covariates from the analysis, further reducing the sample size. Alternatively, if the mechanism that generated the missingness allows, incomplete data can be imputed on the basis of the observed data, preserving the sample size and allowing methods for complete data to be applied afterwards. Moreover, methodologies for data imputation may depend on the particular purpose and may achieve better results by considering specific characteristics of the domain. We study the problem of missing data treatment in the context of survival tree analysis for the estimation of a prognostic patient stratification. Survival tree methods usually address this problem by using surrogate splits, that is, splitting rules that use other variables to yield results similar to the original ones. Instead, our methodology models the dependencies among the clinical variables with a Bayesian network, which is then used to perform data imputation, allowing the survival tree to be applied to the completed dataset. The Bayesian network is learned directly from the incomplete data using a structural expectation-maximization (EM) procedure in which the maximization step is performed with an exact anytime method, so that the only source of approximation is the EM formulation itself. On both simulated and real data, the proposed methodology usually outperformed several existing methods for data imputation, and the resulting imputation improved the stratification estimated by the survival tree (especially with respect to using surrogate splits).
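
To make the imputation loop concrete, here is a deliberately simplified sketch of the E-step/M-step alternation: impute missing entries from the current model, then refit the model on the completed data. For brevity, the "model" here is a set of per-column linear regressions rather than a learned Bayesian network with structural EM, so this illustrates only the iteration pattern, not the paper's method.

```python
# Simplified EM-style imputation loop (stand-in for BN-based imputation).
import numpy as np

def em_impute(X, n_iter=20):
    X = X.copy()
    mask = np.isnan(X)
    # initialise missing entries with column means
    col_means = np.nanmean(X, axis=0)
    X[mask] = np.take(col_means, np.where(mask)[1])
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not mask[:, j].any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.c_[others, np.ones(len(X))]           # design matrix
            w, *_ = np.linalg.lstsq(A[~mask[:, j]],
                                    X[~mask[:, j], j],
                                    rcond=None)          # "M-step": refit
            X[mask[:, j], j] = A[mask[:, j]] @ w         # "E-step": re-impute
    return X

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3)); X[rng.random(X.shape) < 0.1] = np.nan
print(np.isnan(em_impute(X)).any())  # False: dataset is complete
```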

Relevance: 30.00%

Abstract:

Automatically determining and assigning shared, meaningful text labels to data extracted from an e-Commerce web page is a challenging problem. An e-Commerce web page can display a list of data records, each of which can contain a combination of data items (e.g. product name and price) and explicit labels which describe some of these data items. Recent advances in extraction techniques have made it much easier to precisely extract individual data items and labels from a web page; however, two problems remain open: (1) assigning an explicit label to a data item, and (2) determining labels for the remaining data items. Furthermore, improvements in the availability and coverage of vocabularies, especially in the context of e-Commerce web sites, mean that we now have access to a bank of relevant, meaningful and shared labels which can be assigned to extracted data items. There is therefore a need for a technique that takes as input a set of extracted data items and automatically assigns to them the most relevant and meaningful labels from a shared vocabulary. We observe that the Information Extraction (IE) community has developed a great number of techniques that solve problems similar to our own. In this work-in-progress paper we propose to theoretically and experimentally evaluate different IE techniques to ascertain which is most suitable for this problem.
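
A naive baseline helps to fix ideas: match each extracted explicit label against a shared vocabulary by string similarity, and refuse to assign when nothing is close enough. The vocabulary and inputs below are invented, and real IE techniques would exploit context, layout and typed features rather than surface similarity alone.

```python
# Toy label assignment via string similarity against a shared vocabulary.
from difflib import SequenceMatcher

vocabulary = ["product name", "price", "availability", "brand"]  # invented

def best_label(text, vocab, threshold=0.4):
    """Return the most similar vocabulary label, or None if nothing is close."""
    score, label = max((SequenceMatcher(None, text.lower(), v).ratio(), v)
                       for v in vocab)
    return label if score >= threshold else None

for raw in ["Product", "Price (incl. VAT)", "In stock"]:
    print(raw, "->", best_label(raw, vocabulary))
# Product -> product name; Price (incl. VAT) -> price; In stock -> None
```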

Relevance: 30.00%

Abstract:

Introduction: Chitons (Polyplacophora) are molluscs considered to have a simple nervous system without cephalisation. The position of the class within Mollusca is the topic of extensive debate and neuroanatomical characters can provide new sources of phylogenetic data as well as insights into the fundamental biology of the organisms. We report a new discrete anterior sensory structure in chitons, occurring throughout Lepidopleurida, the order of living chitons that retains plesiomorphic characteristics.

Results: The novel "Schwabe organ" is clearly visible on living animals as a pair of streaks of brown or purplish pigment on the roof of the pallial cavity, lateral to or partly covered by the mouth lappets. We describe the histology and ultrastructure of the anterior nervous system, including the Schwabe organ, in two lepidopleuran chitons using light and electron microscopy. The oesophageal nerve ring is greatly enlarged and displays ganglionic structure, with the neuropil surrounded by neural somata. The Schwabe organ is innervated by the lateral nerve cord, and dense bundles of nerve fibres running through the Schwabe organ epithelium are frequently surrounded by the pigment granules which characterise the organ. Basal cells projecting to the epithelial surface and cells bearing a large number of ciliary structures may be indicative of sensory function. The Schwabe organ is present in all genera within Lepidopleurida (and absent throughout Chitonida) and represents a novel anatomical synapomorphy of the clade.

Conclusions: The Schwabe organ is a pigmented sensory organ found on the ventral surface of deep-sea and shallow-water chitons; although its anatomy is well understood, its function remains unknown. The anterior commissure of the chiton oesophageal nerve ring can be considered a brain. Our thorough review of the chiton central nervous system, and particularly of the sensory organs of the pallial cavity, provides a context in which to interpret neuroanatomical homology and assess this new sense organ.

Relevance: 30.00%

Abstract:

Informed by the resource-based view, this study draws on customer relationship management (CRM) and value co-creation literature to develop a framework examining the impact of social networking sites on processes to manage customer relationships. Facilitating the depth and networked interactions necessary to truly engage customers, social networking sites act as a means of enhancing customer relationships through the co-creation of value, moving CRM into a social context. Tested and validated on a data set of hotels, the main contribution of the study to service research lies in the extension of CRM processes, termed relational information processes, to include value co-creation processes due to the social capabilities afforded by social networking sites. Information technology competency and social media orientation act as critical antecedents to these processes, which have a positive impact on both financial and non-financial aspects of firm performance. The theoretical and managerial implications of these findings are discussed accordingly.

Relevance: 30.00%

Abstract:

Context. The Public European Southern Observatory Spectroscopic Survey of Transient Objects (PESSTO) began as a public spectroscopic survey in April 2012. PESSTO classifies transients from publicly available sources and wide-field surveys, and selects science targets for detailed spectroscopic and photometric follow-up. PESSTO runs for nine months of the year, January - April and August - December inclusive, and typically has allocations of 10 nights per month. 

Aims. We describe the data reduction strategy and data products that are publicly available through the ESO archive as the Spectroscopic Survey data release 1 (SSDR1). 

Methods. PESSTO uses the New Technology Telescope with the instruments EFOSC2 and SOFI to provide optical and NIR spectroscopy and imaging. We target supernovae and optical transients brighter than 20.5 mag for classification. Science targets are selected for follow-up based on the PESSTO science goal of extending knowledge of the extremes of the supernova population. We use standard EFOSC2 set-ups providing spectra with resolutions of 13-18 Å between 3345 Å and 9995 Å. A subset of the brighter science targets are selected for SOFI spectroscopy with the blue and red grisms (0.935-2.53 μm, resolutions 23-33 Å) and imaging with broadband JHKs filters.

Results. This first data release (SSDR1) contains flux-calibrated spectra from the first year (April 2012-2013). A total of 221 confirmed supernovae were classified, and we released calibrated optical spectra and classifications publicly within 24 h of the data being taken (via WISeREP). The data in SSDR1 supersede those previously released spectra: they have more reliable and quantifiable flux calibrations, are corrected for telluric absorption, and are made available in standard ESO Phase 3 formats. We estimate the absolute accuracy of the EFOSC2 flux calibrations across the whole survey in SSDR1 to be typically ∼15%, although a number of spectra will have less reliable absolute flux calibration because of weather and slit losses. Acquisition images for each spectrum are available and can, in principle, allow the user to refine the absolute flux calibration. The standard NIR reduction process does not produce high-accuracy absolute spectrophotometry, but synthetic photometry with the accompanying JHKs imaging can improve this. Whenever possible, reduced SOFI images are provided to allow this.
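
The kind of refinement this enables can be sketched as follows: integrate the spectrum through a filter curve to obtain a synthetic flux, compare it with the flux implied by photometry on the acquisition image, and apply a grey rescaling. The filter curve, spectrum and the 15% offset below are placeholders, not PESSTO or ESO pipeline products.

```python
# Sketch: rescale a spectrum so its synthetic photometry matches
# photometry measured on the acquisition image. All inputs are placeholders.
import numpy as np

def synthetic_flux(wave, flux, filt_wave, filt_throughput):
    """Mean flux through a filter (simple photon-weighted integral)."""
    T = np.interp(wave, filt_wave, filt_throughput, left=0.0, right=0.0)
    return np.trapz(flux * T * wave, wave) / np.trapz(T * wave, wave)

wave = np.linspace(4000, 9000, 2000)          # Angstrom
flux = 1e-16 * (wave / 6000.0) ** -2          # placeholder F_lambda
filt_w = np.array([5000, 5500, 6000, 6500])   # crude V-like bandpass
filt_t = np.array([0.0, 0.9, 0.9, 0.0])

f_spec = synthetic_flux(wave, flux, filt_w, filt_t)
f_phot = 1.15 * f_spec                        # pretend photometry is 15% higher
flux_calibrated = flux * (f_phot / f_spec)    # apply grey rescaling
print(round(f_phot / f_spec, 2))              # 1.15
```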

Conclusions. Future data releases will focus on improving the automated flux calibration of the data products. The rapid turnaround between discovery and classification, together with access to reliable pipeline-processed data products, has enabled early science papers within the first few months of the survey.

Relevance: 30.00%

Abstract:

The increasing complexity and scale of cloud computing environments, driven by widespread data centre heterogeneity, makes measurement-based evaluations highly difficult to achieve. The use of simulation tools to support decision making in cloud computing environments is therefore an increasing trend. However, the data required to model cloud computing environments with an appropriate degree of accuracy are typically voluminous, very difficult to collect without some form of automation, often unavailable in a suitable format, and time consuming to gather manually. In this research, an automated method for cloud computing topology definition, data collection and model creation is presented, within the context of a suite of tools that have been developed and integrated to support these activities.
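
An automatically collected topology ultimately has to land in some model; the dataclasses below illustrate one minimal shape such a model could take. The classes and fields are hypothetical, not the schema of the tool suite described here.

```python
# Illustrative data model for an automatically collected cloud topology;
# classes and fields are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    name: str
    vcpus: int
    ram_gb: float

@dataclass
class PhysicalNode:
    hostname: str
    cpu_model: str
    cores: int
    ram_gb: float
    vms: list[VirtualMachine] = field(default_factory=list)

@dataclass
class DataCentre:
    name: str
    nodes: list[PhysicalNode] = field(default_factory=list)

dc = DataCentre("dc-1", [PhysicalNode("host-01", "Xeon E5-2660", 16, 128.0,
                                      [VirtualMachine("vm-7", 4, 16.0)])])
print(sum(len(n.vms) for n in dc.nodes))  # 1
```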

Relevance: 30.00%

Abstract:

This paper investigates the gene selection problem for microarray data with small samples and variant correlation. Most existing algorithms require expensive computational effort, especially when thousands of genes are involved. The main objective of this paper is to effectively select the most informative genes from microarray data while keeping the computational expense affordable. This is achieved by proposing a novel forward gene selection algorithm (FGSA). To overcome the small-sample problem, an augmented-data technique is first employed to produce an augmented data set. Taking inspiration from other gene selection methods, the L2-norm penalty is then introduced into the recently proposed fast regression algorithm to achieve group selection ability. Finally, by defining a proper regression context, the proposed method can be implemented efficiently in software, which significantly reduces the computational burden. Both computational complexity analysis and simulation results confirm the effectiveness of the proposed algorithm in comparison with other approaches.
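
The flavour of such a procedure can be shown in a toy form: greedily add, one gene at a time, the gene that most reduces a ridge-penalised residual. The data and penalty below are synthetic, and the actual FGSA relies on a fast regression formulation and augmented data not reproduced here.

```python
# Toy forward selection with an L2 (ridge) penalty; not the FGSA itself.
import numpy as np

def forward_select(X, y, k, lam=1.0):
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        def penalised_rss(j):
            S = X[:, selected + [j]]
            w = np.linalg.solve(S.T @ S + lam * np.eye(S.shape[1]), S.T @ y)
            r = y - S @ w
            return r @ r + lam * (w @ w)
        best = min(remaining, key=penalised_rss)  # greedy step
        selected.append(best); remaining.remove(best)
    return selected

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 200))                 # 30 samples, 200 "genes"
y = X[:, 5] - 2 * X[:, 42] + 0.1 * rng.normal(size=30)
print(forward_select(X, y, k=2))               # expected to recover 42 and 5
```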

Relevance: 30.00%

Abstract:

Due to population ageing, Japan and Germany have to extend individuals' working lives. However, disability increases with old age. Workplace accommodation is a means of enabling disabled individuals to remain productively employed. Drawing on qualitative interview data, this paper explores how school authorities in these countries use workplace accommodation to support ill teachers, a white-collar profession strongly affected by (mental) ill-health. It furthermore explores how such measures influence older teachers' career expectations and outcomes. It finds that even though the institutional contexts are similar, career options and expectations vary, with similarly negative outcomes for national strategies to extend working lives.

Relevance: 30.00%

Abstract:

The increasing adoption of cloud computing, social networking, mobile and big data technologies provides challenges and opportunities for both research and practice. Researchers face a deluge of data generated by social network platforms, further exacerbated by the co-mingling of social network platforms and the emerging Internet of Everything. While the topicality of big data and social media increases, the literature lacks conceptual tools to help researchers approach, structure and codify knowledge from social media big data in diverse subject-matter domains, many of which are non-technical disciplines. Researchers do not have a general-purpose scaffold for making sense of the data and of the complex web of relationships between entities, social networks, social platforms and other third-party databases, systems and objects. This is further complicated when spatio-temporal data is introduced. Based on practical experience of working with social media datasets and on the existing literature, we propose a general research framework for social media research using big data. Such a framework assists researchers in placing their contributions in an overall context, focusing their research efforts and building the body of knowledge in a given discipline area using social media data in a consistent and coherent manner.

Relevance: 30.00%

Abstract:

Chromatin immunoprecipitation (ChIP), when paired with sequencing or arrays, has become a method of choice for the unbiased identification of genomic binding sites for transcription factors and epigenetic marks in various model systems. The data generated are often then interpreted by groups seeking to link these binding sites to the expression of adjacent or distal genes, and more broadly to the evolution of species, cell fate/differentiation or even cancer development. Against this backdrop is an ongoing debate over the relative importance of DNA sequence versus chromatin structure and modification in the regulation of gene expression (Anon. 2008a Nature 454: 795; Anon. 2008b Nature 454: 711-715; Henikoff et al. 2008 Science 322: 853; Madhani et al. 2008 Science 322: 43-44). Rationally, there is a synergy between the two, and the goal of a biologist is to characterise both comprehensively enough to explain a cellular phenotype or a developmental process. If this is truly our goal, then the critical factor in good science is an awareness of the constraints and potential of the biological models used. The reality, however, is that this discussion is often polarised by funding imperatives and the need to align with either a transcription factor or an epigenetic camp. This article discusses the extrapolations involved in using ChIP data to draw conclusions about these themes and the discoveries that have resulted.

Relevance: 30.00%

Abstract:

Large construction projects create numerous hazards, making construction one of the most dangerous industries in which to work. This element of risk increases in urban areas and can have a negative impact on the external stakeholders associated with a project, along with their surrounding environments. The aim of this paper is to identify and document, in an urban context, the numerous issues that on-site project managers encounter with external stakeholders and how these affect a construction project. In addressing this aim, the core objective is to identify what issues are involved in the management of these stakeholders. To meet this requirement, a qualitative methodology was adopted, encompassing a literature review followed by five individual case-study interviews. The data gathered are assessed qualitatively using mind-mapping software. A number of issues are identified which have an impact on the external stakeholders involved and on proceedings on site. Collectively, the most commonly occurring issues are environmental, legal, health and safety, and communication issues, ranging from road closures and traffic disruption to noise, dust and vibration from site works. It is anticipated that the results of this study will assist project managers in identifying issues concerning external stakeholders, particularly on urban construction projects. A wide range of issues can develop depending on the complexity and nature of each project, but this research illustrates and reinforces to project managers that early identification of issues, effective communication and appropriate liaising can be used to manage issues concerning external stakeholders.

Relevance: 30.00%

Abstract:

Digital image analysis is at a crossroads. While the technology has made great strides over the past few decades, there is an urgent need for image analysis to inform the next wave of large-scale tissue biomarker discovery studies in cancer. Drawing parallels from the growth of next-generation sequencing, this presentation will consider the case for a common language or standard format for storing and communicating digital image analysis data. In this context, image analysis data comprise more than simply an image with markups and attached key-value pair metrics. The desire to objectively benchmark competing platforms, or a push for data to be deposited in public repositories much like genomics data, may drive the need for a standard that also encompasses granular, cell-by-cell data.
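
To give the idea some shape, a cell-by-cell record in such a format might look like the following. Every field name here is invented for illustration; no existing or proposed specification is implied.

```python
# Hypothetical cell-by-cell interchange record for image analysis results.
import json

record = {
    "image_id": "slide-001_roi-03",
    "platform": "example-analyzer",            # invented platform name
    "markup": {"roi_polygon": [[0, 0], [512, 0], [512, 512], [0, 512]]},
    "summary_metrics": {"positive_cell_fraction": 0.34},
    "cells": [
        {"id": 1,
         "centroid_px": [120.5, 88.2],
         "features": {"area_um2": 64.1, "dab_od_mean": 0.41},
         "class": "tumour_positive"},
    ],
}
print(json.dumps(record, indent=2)[:120])  # show the start of the record
```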