896 results for computation- and data-intensive applications
Abstract:
This paper analyses the relationship between production subsidies and firms’ export performance using a very comprehensive and recent firm-level database and controlling for the endogeneity of subsidies. It documents robust evidence that production subsidies stimulate export activity at the intensive margin, although this effect is conditional on firm characteristics. In particular, the positive relationship between subsidies and the intensive margin of exports is strongest among profit-making firms, firms in capital-intensive industries, and those located in non-coastal regions. Compared to firm characteristics, the extent of heterogeneity across ownership structure (SOEs, collectives, and privately owned firms) proves to be relatively less important.
Abstract:
We introduce a flexible visual data mining framework which combines advanced projection algorithms from the machine learning domain with visual techniques developed in the information visualization domain. The advantage of such an interface is that the user is directly involved in the data mining process. We integrate principled projection algorithms, such as generative topographic mapping (GTM) and hierarchical GTM (HGTM), with powerful visual techniques, such as magnification factors, directional curvatures, parallel coordinates, and billboarding, to provide a visual data mining framework. Results on a real-life chemoinformatics dataset using GTM are promising and have been analytically compared with those from traditional projection methods. It is also shown that the HGTM algorithm provides additional value for large datasets. The computational complexity of these algorithms is discussed to demonstrate their suitability for the visual data mining framework.
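As a rough illustration of the projection component, the following is a minimal sketch of GTM trained by expectation-maximisation on toy data (the data, grid sizes, and parameter values are illustrative assumptions, not the framework's implementation):

```python
# Minimal GTM sketch: a 2-D latent grid is mapped through RBFs into data
# space and fitted by EM; all sizes and data here are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(500, 3))                         # toy data, N x D

g = np.linspace(-1, 1, 10)
X = np.array([(a, b) for a in g for b in g])          # K latent grid points
c = np.linspace(-1, 1, 4)
Mu = np.array([(a, b) for a in c for b in c])         # M RBF centres
sigma = 2 * (c[1] - c[0])
Phi = np.exp(-((X[:, None, :] - Mu[None]) ** 2).sum(-1) / (2 * sigma**2))

W = rng.normal(scale=0.1, size=(Phi.shape[1], T.shape[1]))
beta = 1.0                                            # noise precision

for _ in range(30):                                   # EM iterations
    Y = Phi @ W                                       # K x D mapped grid
    d2 = ((Y[:, None, :] - T[None]) ** 2).sum(-1)     # K x N squared distances
    R = np.exp(-0.5 * beta * (d2 - d2.min(0)))        # E-step: responsibilities
    R /= R.sum(0)
    G = np.diag(R.sum(1))                             # M-step for the mapping W
    W = np.linalg.solve(Phi.T @ G @ Phi + 1e-3 * np.eye(Phi.shape[1]),
                        Phi.T @ (R @ T))
    d2 = (((Phi @ W)[:, None, :] - T[None]) ** 2).sum(-1)
    beta = T.size / (R * d2).sum()                    # M-step for beta

proj = R.T @ X                                        # posterior-mean 2-D coords
```

The posterior-mean coordinates `proj` give the 2-D visualisation onto which techniques such as magnification factors and directional curvatures can then be overlaid.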
Abstract:
Genetic experiments over the last few decades have identified many regulatory proteins critical for DNA transcription. The dynamics of their transcriptional activities shape the differential expression of the genes they control. Here we describe a simple method, based on secreted luciferase, to measure the activities of two transcription factors, NF-κB and HIF. This technique can effectively monitor the dynamics of transcriptional events in a population of cells and can be scaled up for high-throughput screening and promoter analysis, making it ideal for data-demanding applications such as mathematical modelling.
Abstract:
In previous Statnotes, many of the statistical tests described rely on the assumption that the data are a random sample from a normal or Gaussian distribution. These include most of the tests in common usage, such as the ‘t’ test, the various types of analysis of variance (ANOVA), and Pearson’s correlation coefficient (‘r’). In microbiology research, however, not all variables can be assumed to follow a normal distribution. Yeast populations, for example, are a notable feature of freshwater habitats, representatives of over 100 genera having been recorded. Most common are the ‘red yeasts’ such as Rhodotorula, Rhodosporidium, and Sporobolomyces and ‘black yeasts’ such as Aureobasidium pullulans, together with species of Candida. Despite the abundance of genera and species, the overall density of an individual species in freshwater is likely to be low and hence, samples taken from such a population will contain very low numbers of cells. A rare organism living in an aquatic environment may be distributed more or less at random in a volume of water and therefore, samples taken from such an environment may result in counts which are more likely to be distributed according to the Poisson than the normal distribution. The Poisson distribution was named after the French mathematician Siméon Poisson (1781-1840) and has many applications in biology, especially in describing rare or randomly distributed events, e.g., the number of mutations in a given sequence of DNA after exposure to a fixed amount of radiation or the number of cells infected by a virus given a fixed level of exposure. This Statnote describes how to fit the Poisson distribution to counts of yeast cells in samples taken from a freshwater lake.
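A minimal sketch of such a fit (using hypothetical counts rather than the Statnote's own data) estimates the Poisson mean from the sample and compares observed with expected frequencies:

```python
# Fit a Poisson distribution to cell counts and test goodness of fit;
# the counts below are hypothetical, for illustration only.
import numpy as np
from scipy import stats

counts = np.array([0, 1, 0, 2, 1, 0, 0, 3, 1, 0, 2, 1, 0, 0, 1])

lam = counts.mean()                        # ML estimate of the Poisson mean

k = np.arange(counts.max() + 1)            # count categories 0, 1, ..., max
obs = np.array([(counts == i).sum() for i in k])
exp_freq = stats.poisson.pmf(k, lam) * len(counts)
exp_freq *= obs.sum() / exp_freq.sum()     # rescale so totals match exactly

# Goodness of fit; ddof=1 because one parameter (lam) was estimated.
# With expected frequencies this small, adjacent cells should be pooled.
chi2, p = stats.chisquare(obs, exp_freq, ddof=1)
print(f"lambda = {lam:.2f}, chi2 = {chi2:.2f}, p = {p:.3f}")
```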
Abstract:
Purpose - The main aim of the research is to shed light on the role of information and communication technology (ICT) in the logistics innovation process of small and medium-sized third party logistics providers (3PLs). Design/methodology/approach - A triangulated research strategy was designed using a combination of quantitative and qualitative methods. The former involved the use of a questionnaire survey of small and medium-sized Italian 3PLs, with 153 usable responses received. The latter comprised a series of focus groups and the use of seven case studies. Findings - There is a relatively low level of ICT expenditure, with few companies adopting formal technology investment strategies. The findings highlight the strategic importance of supply chain integration for 3PLs, with companies that have embarked on an expansion of their service portfolios showing a higher level of both ICT usage and information integration. Lack of technology skills in the workforce is a major constraint on ICT adoption. Given the proliferation of logistics-related ICT tools and applications in recent years, it has been difficult for small and medium-sized 3PLs to select appropriate applications. Research limitations/implications - The paper provides practical guidance to researchers on the effective use of mixed-methods research based on the concept of methodological triangulation. In particular, it shows how questionnaire surveys, focus groups and case study analysis can be used in combination to provide insights into multi-faceted supply chain phenomena. It also identifies several potentially fruitful avenues for future research in this specific field. Practical implications - The paper's findings provide useful guidance for practitioners on the effective adoption of ICT as part of the logistics innovation process. The findings also provide support for ICT vendors in the design of ICT solutions that are aligned to the needs of small 3PLs. Originality/value - There is currently a paucity of research into the drivers and inhibitors of ICT in the innovation processes of small and medium-sized 3PLs. This paper fills this gap by exploring the issue using a range of complementary research approaches.
Abstract:
The project consists of an experimental and numerical modelling study of the application of ultra-long Raman fibre laser (URFL) based amplification techniques to high-speed multi-wavelength optical communications systems. The research focuses on 40 Gb/s transmission data rates in the telecommunications C-band, with direct and coherent detection. The optical transmission performance of URFL-based systems in terms of optical noise, gain bandwidth, and gain flatness is evaluated for different system configurations. Systems with different overall span lengths, transmission fibre types, and data modulation formats are investigated. Performance is compared with conventional erbium-doped fibre amplifier based systems to identify configurations in which URFL-based amplification provides performance or commercial advantages.
Abstract:
Despite being nominated as a key potential interaction technique for supporting today's mobile technology user, the widespread commercialisation of speech-based input is currently being impeded by unacceptable recognition error rates. Developing effective speech-based solutions for use in mobile contexts, given the varying extent of background noise, is challenging. The research presented in this paper is part of an ongoing investigation into how best to incorporate speech-based input within mobile data collection applications. Specifically, this paper reports on a comparison of three different commercially available microphones in terms of their efficacy to facilitate mobile, speech-based data entry. We describe, in detail, our novel evaluation design as well as the results we obtained.
Abstract:
As a new medium for questionnaire delivery, the internet has the potential to revolutionise the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability [1, 2]. For instance, delivery is faster, responses are received more quickly, and data collection can be automated or accelerated [1-3]. Online questionnaires can also provide many capabilities not found in traditional paper-based questionnaires: they can include pop-up instructions and error messages; they can incorporate links; and it is possible to encode difficult skip patterns, making such patterns virtually invisible to respondents. Like many new technologies, however, online questionnaires face criticism despite their advantages. Typically, such criticisms focus on the vulnerability of online questionnaires to the four standard survey error types: namely, coverage, non-response, sampling, and measurement errors.

Although, like all survey errors, coverage error (“the result of not allowing all members of the survey population to have an equal or nonzero chance of being sampled for participation in a survey” [2, pg. 9]) also affects traditional survey methods, it is currently exacerbated in online questionnaires as a result of the digital divide. That said, many developed countries have reported substantial increases in computer and internet access and/or are targeting this as part of their immediate infrastructural development [4, 5]. Indicating that familiarity with information technologies is increasing, these trends suggest that coverage error will rapidly diminish to an acceptable level (for the developed world at least) in the near future, and in so doing, positively reinforce the advantages of online questionnaire delivery.

The second error type, the non-response error, occurs when individuals fail to respond to the invitation to participate in a survey or abandon a questionnaire before it is completed. Given today’s societal trend towards self-administration [2], the former is inevitable, irrespective of delivery mechanism. Conversely, non-response as a consequence of questionnaire abandonment can be relatively easily addressed. Unlike traditional questionnaires, the delivery mechanism for online questionnaires makes estimation of questionnaire length and time required for completion difficult, thus increasing the likelihood of abandonment. By incorporating a range of features into the design of an online questionnaire, it is possible to facilitate such estimation – and indeed, to provide respondents with context-sensitive assistance during the response process – and thereby reduce abandonment while eliciting feelings of accomplishment [6].

For online questionnaires, sampling error (“the result of attempting to survey only some, and not all, of the units in the survey population” [2, pg. 9]) can arise when all but a small portion of the anticipated respondent set is alienated (and so fails to respond) as a result of, for example, disregard for varying connection speeds, bandwidth limitations, browser configurations, monitors, hardware, and user requirements during the questionnaire design process. Similarly, measurement errors (“the result of poor question wording or questions being presented in such a way that inaccurate or uninterpretable answers are obtained” [2, pg. 11]) will lead to respondents becoming confused and frustrated.
Sampling, measurement, and non-response errors are likely to occur when an online questionnaire is poorly designed. Individuals will answer questions incorrectly, abandon questionnaires, and may ultimately refuse to participate in future surveys; thus, the benefit of online questionnaire delivery will not be fully realised. To prevent errors of this kind, and their consequences, it is extremely important that practical, comprehensive guidelines exist for the design of online questionnaires. Many design guidelines exist for paper-based questionnaires (e.g. [7-14]); the same is not true for the design of online questionnaires [2, 15, 16]. The research presented in this paper is a first attempt to address this discrepancy. Section 2 describes the derivation of a comprehensive set of guidelines for the design of online questionnaires and briefly (given space restrictions) outlines the essence of the guidelines themselves. Although online questionnaires reduce traditional delivery costs (e.g. paper, mail-out, and data entry), set-up costs can be high given the need either to adopt and acquire training in questionnaire development software or to secure the services of a web developer. Neither approach, however, guarantees a good questionnaire (often because the person designing the questionnaire lacks relevant knowledge in questionnaire design). Drawing on existing software evaluation techniques [17, 18], we assessed the extent to which current questionnaire development applications support our guidelines; Section 3 describes the framework used for the evaluation, and Section 4 discusses our findings. Finally, Section 5 concludes with a discussion of further work.
Abstract:
This paper considers the value of innovation to large Australian firms. Specifically, we investigate how R&D and intellectual property activity influences the market value of firms, using a Tobin’s q approach. R&D data are available for the period 1994–96 and data on patent, trade mark and design applications for 1996. The findings suggest that R&D and patent activity are positively and significantly associated with market value. The results also suggest that private returns to R&D in Australia are low by international standards.
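For context, the Tobin's q approach relates a firm's market value to the replacement cost of its assets; a sketch of the standard form of the estimating equation in this literature (not necessarily the authors' exact specification) is:

```latex
q_i = \frac{V_i}{A_i},
\qquad
\ln q_i = \alpha + \beta \,\frac{RD_i}{A_i} + \gamma \,\frac{IP_i}{A_i} + \varepsilon_i
```

where $V_i$ is the market value of firm $i$, $A_i$ the replacement (or book) value of its assets, $RD_i$ its R&D expenditure, and $IP_i$ its count of patent, trade mark, and design applications; positive and significant estimates of $\beta$ and $\gamma$ correspond to the association with market value reported above.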
Abstract:
Computational performance increasingly depends on parallelism, and many systems rely on heterogeneous resources such as GPUs and FPGAs to accelerate computationally intensive applications. However, implementations for such heterogeneous systems are often hand-crafted and optimised for a single computation scenario, and it can be challenging to maintain high performance when application parameters change. In this paper, we demonstrate that machine learning can help to dynamically choose parameters for task scheduling and load-balancing based on changing characteristics of the incoming workload. We use a financial option pricing application as a case study. We propose a simulation of processing financial tasks on a heterogeneous system with GPUs and FPGAs, and show how dynamic, on-line optimisations could improve such a system. We compare on-line and batch processing algorithms, and we also consider cases with no dynamic optimisations.
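As a toy illustration of the approach (a sketch, not the paper's system; every name, rate, and parameter below is an invented assumption), the following routes incoming tasks between two simulated devices using throughput estimates that are refined on-line, so the scheduling decisions adapt as workload characteristics change:

```python
# On-line, learning-based load balancing between two heterogeneous devices;
# all rates and parameters are illustrative assumptions.
import random

class Device:
    def __init__(self, name, true_rate):
        self.name = name
        self.true_rate = true_rate   # actual work units/ms, unknown to scheduler
        self.est_rate = 1.0          # on-line estimate, refined from observations
        self.backlog = 0.0           # work units currently queued

    def expected_finish(self, work):
        # predicted completion time if this task joins the queue
        return (self.backlog + work) / self.est_rate

    def submit(self, work, alpha=0.1):
        self.backlog += work
        # observe a noisy throughput sample and update the running estimate
        sample = self.true_rate * random.uniform(0.8, 1.2)
        self.est_rate = (1 - alpha) * self.est_rate + alpha * sample

    def tick(self, dt=1.0):
        # devices drain their queues at their true rates
        self.backlog = max(0.0, self.backlog - self.true_rate * dt)

def schedule(work, devices):
    # greedy rule: pick the device with the lowest expected completion time
    best = min(devices, key=lambda d: d.expected_finish(work))
    best.submit(work)
    return best.name

devices = [Device("GPU", true_rate=4.0), Device("FPGA", true_rate=2.5)]
for step in range(1000):
    work = random.expovariate(0.1)   # task sizes; the distribution may drift
    schedule(work, devices)
    for d in devices:
        d.tick()
```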
Abstract:
Dance videos are interesting and semantics-intensive. At the same time, they are among the most complex types of video compared with other types such as sports, news, and movie videos, and dance video is one of the genres least explored by researchers across the globe. Dance videos exhibit rich semantics, such as macro features and micro features, and can be classified into several types. Hence, conceptually modelling the expressive semantics of dance videos is both crucial and complex. This paper presents a generic Dance Video Semantics Model (DVSM) to represent the semantics of dance videos at different granularity levels, identified by the components of the accompanying song. The model incorporates both syntactic and semantic features of the videos and introduces a new entity type, called Agent, to specify the micro features of dance videos. Instantiations of the model are expressed as graphs. The model is implemented as an annotation tool, using J2SE and JMF, for the macro and micro features of dance videos. Finally, examples and evaluation results are provided to demonstrate the effectiveness of the proposed model.
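To make the structure concrete, here is a minimal sketch of how the model's main entities might be represented (attribute names are illustrative assumptions; the paper's tool itself is written in J2SE, and Python is used here only for brevity):

```python
# Illustrative sketch of DVSM-style entities: a video is decomposed by song
# components, each carrying macro features and Agent-based micro features.
from dataclasses import dataclass, field

@dataclass
class Agent:                      # new entity type for micro features
    name: str
    body_part: str                # e.g. "hand", "leg"  (assumed attributes)
    movement: str                 # e.g. "raise", "spin"

@dataclass
class SongComponent:              # granularity level, e.g. verse or chorus
    label: str
    start_s: float
    end_s: float
    macro_features: dict = field(default_factory=dict)
    agents: list = field(default_factory=list)   # micro features

@dataclass
class DanceVideo:
    title: str
    dance_type: str
    components: list = field(default_factory=list)

# annotate one component of a hypothetical video
video = DanceVideo("example", "folk")
chorus = SongComponent("chorus", 30.0, 55.0, {"formation": "circle"})
chorus.agents.append(Agent("dancer1", "hand", "raise"))
video.components.append(chorus)
```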
Abstract:
This paper was presented at the 12th International Conference on Applications of Computer Algebra, Varna, Bulgaria, June 2006.
Abstract:
* The research was supported by INTAS 00-397 and 00-626 Projects.
Abstract:
The development of new all-optical technologies for data processing and signal manipulation is a field of growing importance with strong potential for numerous applications in diverse areas of modern science. Nonlinear phenomena occurring in optical fibres have many attractive features and great, but not yet fully explored, potential in signal processing. Here, we review recent progress on the use of fibre nonlinearities for the generation and shaping of optical pulses and on the applications of advanced pulse shapes in all-optical signal processing. Amongst other topics, we discuss ultrahigh repetition rate pulse sources, the generation of parabolic-shaped pulses in active and passive fibres, the generation of pulses with triangular temporal profiles, and coherent supercontinuum sources. The signal processing applications span optical regeneration, linear distortion compensation, optical decision at the receiver in optical communication systems, spectral and temporal signal doubling, and frequency conversion.
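Pulse shaping of this kind is governed by propagation under the nonlinear Schrödinger equation, conventionally simulated with the split-step Fourier method; a minimal sketch (with illustrative fibre parameters, not values taken from the review) is:

```python
# Symmetric split-step Fourier sketch of pulse propagation under the
# nonlinear Schrodinger equation; all parameter values are illustrative.
import numpy as np

N = 1024
T_window = 100e-12                          # s, total time window
t = np.linspace(-T_window/2, T_window/2, N, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(N, d=t[1] - t[0])

beta2 = -20e-27     # s^2/m, group-velocity dispersion (anomalous, assumed)
gamma = 1.3e-3      # 1/(W m), Kerr nonlinearity coefficient
dz, steps = 10.0, 1000                      # 10 km of fibre in total

A = np.exp(-0.5 * (t / 10e-12) ** 2)        # 10 ps Gaussian input, 1 W peak

half_disp = np.exp(0.5j * beta2 * w**2 * (dz / 2))   # half dispersion step
for _ in range(steps):
    A = np.fft.ifft(half_disp * np.fft.fft(A))       # linear half-step
    A *= np.exp(1j * gamma * np.abs(A)**2 * dz)      # full nonlinear step
    A = np.fft.ifft(half_disp * np.fft.fft(A))       # linear half-step
```

Varying the dispersion sign, input pulse shape, and fibre length in such a simulation moves the system between the qualitative pulse-shaping regimes discussed above.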
Abstract:
ACM Subject Classification: H.3.7 Digital Libraries, K.6.5 Security and Protection