958 results for "communication feature property"


Relevance: 20.00%

Abstract:

A modified linear prediction (MLP) method is proposed in which the reference sensor is optimally located on the extended line of the array. The criterion of optimality is the minimization of the prediction error power, where the prediction error is defined as the difference between the reference sensor and the weighted array outputs. It is shown that the L2-norm of the least-squares array weights attains a minimum value for the optimum spacing of the reference sensor, subject to some soft constraint on signal-to-noise ratio (SNR). How this minimum norm property can be used for finding the optimum spacing of the reference sensor is described. The performance of the MLP method is studied and compared with that of the linear prediction (LP) method using resolution, detection bias, and variance as the performance measures. The study reveals that the MLP method performs much better than the LP technique.
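The minimum-norm criterion above can be sketched numerically. The following is a minimal numpy simulation, not the paper's method in detail: it assumes a half-wavelength uniform array, a single narrowband source at 20 degrees, 10 dB SNR and synthetic Gaussian noise (all illustrative choices), solves the least-squares prediction weights for each candidate reference-sensor spacing, and reports the spacing at which the L2-norm of the weights is smallest.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 8                      # array sensors
theta = np.deg2rad(20.0)   # source direction (assumed)
snr_db = 10.0              # assumed SNR

def snapshots(spacing, n_snap=500):
    """Array + reference-sensor snapshots for one plane wave in noise.
    `spacing` is the reference sensor's distance (in half-wavelengths)
    beyond the last array element, on the extended array line."""
    pos = np.arange(M, dtype=float)            # array positions (d = lambda/2)
    ref = (M - 1) + spacing                    # reference-sensor position
    s = (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)) / np.sqrt(2)
    steer = np.exp(1j * np.pi * pos * np.sin(theta))
    steer_ref = np.exp(1j * np.pi * ref * np.sin(theta))
    sigma = 10 ** (-snr_db / 20)
    noise = sigma * (rng.standard_normal((M + 1, n_snap)) +
                     1j * rng.standard_normal((M + 1, n_snap))) / np.sqrt(2)
    x = np.outer(steer, s) + noise[:M]
    x_ref = steer_ref * s + noise[M]
    return x, x_ref

def ls_weights(x, x_ref):
    """Least-squares weights predicting the reference output from the array."""
    w, *_ = np.linalg.lstsq(x.T, x_ref, rcond=None)
    return w

spacings = np.linspace(0.5, 3.0, 11)
norms = [np.linalg.norm(ls_weights(*snapshots(d))) for d in spacings]
d_opt = spacings[int(np.argmin(norms))]
print("spacing with minimum ||w||_2:", d_opt)
```

The sweep mirrors the procedure the abstract describes: the spacing minimizing the weight norm is taken as the optimum reference-sensor location.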

Relevance: 20.00%

Abstract:

While plants of a single species emit a diversity of volatile organic compounds (VOCs) to attract or repel interacting organisms, these specific messages may be lost in the midst of the hundreds of VOCs produced by sympatric plants of different species, many of which may have no signal content. Receivers must be able to reduce the babel or noise in these VOCs in order to correctly identify the message. For chemical ecologists faced with vast amounts of data on volatile signatures of plants in different ecological contexts, it is imperative to employ accurate methods of classifying messages, so that suitable bioassays may then be designed to understand message content. We demonstrate the utility of 'Random Forests' (RF), a machine-learning algorithm, for the task of classifying volatile signatures and choosing the minimum set of volatiles for accurate discrimination, using data from sympatric Ficus species as a case study. We demonstrate the advantages of RF over conventional classification methods such as principal component analysis (PCA), as well as data-mining algorithms such as support vector machines (SVM), diagonal linear discriminant analysis (DLDA) and k-nearest neighbour (KNN) analysis. We show why a tree-building method such as RF, which is increasingly being used by the bioinformatics, food technology and medical communities, is particularly advantageous for the study of plant communication using volatiles, dealing, as it must, with abundant noise.
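As an illustration of the classification task described above, here is a minimal, self-contained sketch of the random-forest idea (bootstrap sampling plus random feature subsets, here with depth-1 trees rather than full CART trees) applied to synthetic "volatile signature" data. The dataset, feature counts and ensemble settings are invented for illustration, not taken from the Ficus study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "volatile signatures": two species, 6 VOC features, of which only the
# first two carry signal; the rest are noise (the "babel").
n_per, n_feat = 40, 6
X0 = rng.normal(0.0, 1.0, (n_per, n_feat))
X1 = rng.normal(0.0, 1.0, (n_per, n_feat))
X1[:, :2] += 2.0                                 # species difference in 2 VOCs
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n_per, int), np.ones(n_per, int)]

def fit_stump(X, y, feats):
    """Best single-feature median-threshold split over a feature subset."""
    best = (0, 0.0, -1.0, False)                 # (feature, threshold, acc, flip)
    for f in feats:
        t = np.median(X[:, f])
        pred = (X[:, f] > t).astype(int)
        acc, acc_flip = (pred == y).mean(), (1 - pred == y).mean()
        if max(acc, acc_flip) > best[2]:
            best = (int(f), float(t), max(acc, acc_flip), acc_flip > acc)
    return best

def fit_forest(X, y, n_trees=50, m_try=2):
    forest = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(y), len(y))                  # bootstrap sample
        feats = rng.choice(X.shape[1], m_try, replace=False)   # random subspace
        f, t, _, flip = fit_stump(X[idx], y[idx], feats)
        forest.append((f, t, flip))
    return forest

def predict(forest, X):
    votes = np.zeros(len(X))
    for f, t, flip in forest:
        p = (X[:, f] > t).astype(int)
        votes += (1 - p) if flip else p          # each tree gets one vote
    return (votes / len(forest) > 0.5).astype(int)

forest = fit_forest(X, y)
acc = (predict(forest, X) == y).mean()
usage = np.bincount([f for f, _, _ in forest], minlength=n_feat)
print("training accuracy:", acc)
print("splits per feature:", usage)
```

Counting how often each feature is chosen for a split (`usage`) is a crude analogue of RF's variable-importance measure, which is what lets the method pick the minimum set of discriminating volatiles.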

Relevance: 20.00%

Abstract:

The title of the 14th International Conference on Electronic Publishing (ELPUB), "Publishing in the networked world: Transforming the nature of communication", is a timely one. Scholarly communication and scientific publishing have recently been undergoing subtle changes. Published papers are no longer fixed physical objects, as they once were. The "convergence" of information, communication, publishing and web technologies, along with the emergence of Web 2.0 and social networks, has completely transformed scholarly communication, and scientific papers have turned into living, changing entities in the online world. The themes (electronic publishing and social networks; scholarly publishing models; and technological convergence) selected for the conference are meant to address the issues involved in this transformation process. We are pleased to present the proceedings book with more than 30 papers and short communications addressing these issues. What you hold in your hands is the culmination of almost a year's work by many people, including conference organizers, authors, reviewers, editors and print and online publishers. The ELPUB 2010 conference was organized and hosted by the Hanken School of Economics in Helsinki, Finland. Professors Turid Hedlund of Hanken School of Economics and Yaşar Tonta of Hacettepe University Department of Information Management (Ankara, Turkey) served as General Chair and Program Chair, respectively. We received more than 50 submissions from several countries. All submissions were peer-reviewed by members of an international Program Committee, whose contributions proved most valuable and appreciated. The 14th ELPUB conference carries on the tradition of previous conferences held in the United Kingdom (1997 and 2001), Hungary (1998), Sweden (1999), Russia (2000), the Czech Republic (2002), Portugal (2003), Brazil (2004), Belgium (2005), Bulgaria (2006), Austria (2007), Canada (2008) and Italy (2009).
The ELPUB Digital Library (http://elpub.scix.net) serves as the archive for the papers presented at the ELPUB conferences through the years. The 15th ELPUB conference will be organized by the Department of Information Management of Hacettepe University and will take place in Ankara, Turkey, from 14 to 16 June 2011. (Details can be found at the ELPUB web site as the conference date nears.) We thank Marcus Sandberg and Hannu Sääskilahti for copyediting, and Library Director Tua Hindersson-Söderholm for agreeing to publish both the online and print versions of the proceedings. Thanks also to Patrik Welling for maintaining the conference web site and to Tanja Dahlgren for administrative support. We warmly thank our colleagues at Hanken School of Economics and our sponsors for their support in organizing the conference.

Relevance: 20.00%

Abstract:

There are a number of large networks which occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both design and planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear and the objective function nonlinear, or both may be nonlinear). The second part develops a mathematical model, putting together some important constraints based on the abstraction of a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix, and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and evolves a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there is a possibility of an input (power, water, messages, goods, etc.), an output, or neither. Normally, the network equations describe the flows among nodes through the arcs. These network equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants; the result required will be the flows through the arcs.
To solve the normal base problem, we are given input flows at nodes, output flows at nodes and certain physical constraints on other variables at nodes, and we should find the flows through the network (variables at nodes will be referred to as across variables). The optimization problem involves selecting inputs at nodes so as to optimise an objective function; the objective may be a cost function based on the inputs to be minimised, a loss function or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration. Stage one calculates the problem variables and stage two the multipliers λ. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) occurs in stage two also.

A second solution procedure has also been embedded into the first one. This is called the total residue approach. It changes the equality constraints so that faster convergence of the iterations is obtained. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve the optimization problems. Two types of algorithms have been proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local area case. These algorithms are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach used was to define an algorithm that is fast and uses minimal communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
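The two-stage Lagrange-multiplier structure described above can be illustrated on a toy linear-quadratic network; the paper's networks are nonlinear, so this sketch only shows the KKT system that each iteration of such a method would solve. The three-arc network, the costs and the injections are invented for illustration.

```python
import numpy as np

# Tiny 3-node, 3-arc network; rows = nodes, columns = arcs (0->1, 1->2, 0->2)
A = np.array([[ 1.0,  0.0,  1.0],
              [-1.0,  1.0,  0.0],
              [ 0.0, -1.0, -1.0]])
b = np.array([1.0, 0.0, -1.0])   # inject one unit at node 0, withdraw at node 2
Q = np.diag([1.0, 2.0, 3.0])     # quadratic arc-cost (loss) weights

# KKT conditions of  min (1/2) x^T Q x  subject to  A x = b :
#     Q x + A_r^T lam = 0,   A_r x = b_r
# The node-balance rows are linearly dependent (each column of A sums to
# zero), so one row is dropped before solving.
Ar, br = A[:-1], b[:-1]
n, m = Q.shape[0], Ar.shape[0]
K = np.block([[Q, Ar.T],
              [Ar, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.r_[np.zeros(n), br])
x, lam = sol[:n], sol[n:]       # stage-one variables and stage-two multipliers
print("arc flows:", x)
print("node multipliers:", lam)
```

In a nonlinear network the same block matrix (with Q replaced by the Jacobian of the flow equations) would be re-solved each iteration, which is the shared-Jacobian structure the abstract points out.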

Relevance: 20.00%

Abstract:

We have developed a novel nanoparticle-tracking-based interface microrheology technique to perform in situ studies on confined complex fluids. To demonstrate the power of this technique, we show, for the first time, how in situ glass formation in polymers confined at the air-water interface can be directly probed by monitoring the variation of the mean square displacement of embedded nanoparticles as a function of surface density. We have further quantified the appearance of dynamic heterogeneity, and hence vitrification, in polymethyl methacrylate monolayers above a certain surface density through the variation of the non-Gaussian parameter of the probes. (C) 2010 American Institute of Physics. [doi:10.1063/1.3471584].
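The two core quantities of such a tracking analysis, the mean square displacement and the non-Gaussian parameter, can be computed from probe trajectories in a few lines. Below is a hedged numpy sketch on synthetic Brownian trajectories; the diffusivities, step counts and lag range are illustrative, not values from the experiment.

```python
import numpy as np

rng = np.random.default_rng(2)

def msd(traj, max_lag):
    """Time-averaged mean square displacement of a 2-D trajectory."""
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        d = traj[lag:] - traj[:-lag]
        out[lag - 1] = np.mean(np.sum(d * d, axis=1))
    return out

def alpha2(traj, lag):
    """Non-Gaussian parameter in 2-D: <r^4> / (2 <r^2>^2) - 1."""
    d = traj[lag:] - traj[:-lag]
    r2 = np.sum(d * d, axis=1)
    return np.mean(r2 ** 2) / (2.0 * np.mean(r2) ** 2) - 1.0

# Synthetic probe trajectories: free diffusion vs. a nearly arrested probe
n_steps, D_free, D_slow = 2000, 1.0, 0.01
free = np.cumsum(rng.normal(0, np.sqrt(2 * D_free), (n_steps, 2)), axis=0)
slow = np.cumsum(rng.normal(0, np.sqrt(2 * D_slow), (n_steps, 2)), axis=0)

msd_free, msd_slow = msd(free, 50), msd(slow, 50)
print("MSD ratio at lag 50:", msd_free[-1] / msd_slow[-1])
print("alpha_2 (free, lag 1):", alpha2(free, 1))
```

For ideal Brownian motion the non-Gaussian parameter is near zero; dynamic heterogeneity shows up as alpha_2 rising above zero, which is the signature the abstract uses to mark vitrification.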

Relevance: 20.00%

Abstract:

Inverse filters are conventionally used for resolving overlapping signals of identical waveshape. However, the inverse filtering approach is shown here to be useful for resolving overlapping signals, identical or otherwise, of unknown waveshapes. Digital inverse filter design based on the autocorrelation formulation of linear prediction is known to perform optimum spectral flattening of the input signal for which the filter is designed. This property of the inverse filter is used to accomplish composite signal decomposition. The theory is presented assuming the constituent signals to be responses of all-pole filters; however, the approach may be used in more general situations.
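A minimal numpy sketch of the decomposition idea: estimate the inverse filter by the autocorrelation method of linear prediction, then apply it as an FIR filter so that two overlapping responses of the same all-pole filter collapse back to (approximately) their excitation impulses. The AR(2) model, impulse positions and amplitudes are illustrative choices, not taken from the paper.

```python
import numpy as np

# Composite signal: two overlapping responses of one all-pole AR(2) filter
a_true = np.array([1.0, -1.2, 0.8])      # A(z) = 1 - 1.2 z^-1 + 0.8 z^-2
impulses = np.zeros(200)
impulses[50], impulses[60] = 1.0, 0.7    # overlapping excitations
x = np.zeros(200)
for n in range(200):                     # x[n] = 1.2 x[n-1] - 0.8 x[n-2] + e[n]
    x[n] = impulses[n]
    if n >= 1:
        x[n] += 1.2 * x[n - 1]
    if n >= 2:
        x[n] -= 0.8 * x[n - 2]

def lpc(x, order):
    """LP coefficients by the autocorrelation (Yule-Walker) method."""
    r = np.correlate(x, x, "full")[len(x) - 1: len(x) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.r_[1.0, np.linalg.solve(R, -r[1: order + 1])]

a_hat = lpc(x, 2)
residual = np.convolve(x, a_hat)[:200]   # inverse filtering (FIR with a_hat)
print("estimated inverse filter:", a_hat)
```

The residual is the spectrally flattened output: its dominant peaks sit at the excitation instants, which is how the composite signal is decomposed.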

Relevance: 20.00%

Abstract:

OBJECTIVES. Oral foreign language skills are an integral part of one's social, academic and professional competence. This can be problematic for those suffering from foreign language communication apprehension (CA), or a fear of speaking a foreign language. CA manifests itself, for example, through feelings of anxiety and tension, physical arousal and avoidance of foreign language communication situations. According to scholars, foreign language CA may impede the language learning process significantly and have detrimental effects on one's language learning, academic achievement and career prospects. Drawing on upper secondary students' subjective experiences of communication situations in English as a foreign language, this study seeks, first, to describe, analyze and interpret why upper secondary students experience English language communication apprehension in English as a foreign language (EFL) classes. Second, this study seeks to analyse what the most anxiety-arousing oral production tasks in EFL classes are, and which features of different oral production tasks arouse English language communication apprehension and why. The ultimate objectives of the present study are to raise teachers' awareness of foreign language CA and its features, manifestations and impacts in foreign language classes as well as to suggest possible ways to minimize the anxiety-arousing features in foreign language classes. METHODS. The data was collected in two phases by means of six-part Likert-type questionnaires and theme interviews, and analysed using both quantitative and qualitative methods. The questionnaire data was collected in spring 2008. The respondents were 122 first-year upper secondary students, 68 % of whom were girls and 31 % of whom were boys. The data was analysed by statistical methods using SPSS software. The theme interviews were conducted in spring 2009. 
The interviewees were 11 second-year upper secondary students aged 17 to 19, who were chosen by purposeful selection on the basis of their English language CA level measured in the questionnaires. Six interviewees were classified as high apprehensives and five as low apprehensives according to their score in the foreign language CA scale in the questionnaires. The interview data was coded and thematized using the technique of content analysis. The analysis and interpretation of the data drew on a comparison of the self-reports of the highly apprehensive and low apprehensive upper secondary students. RESULTS. The causes of English language CA in EFL classes as reported by the students were both internal and external in nature. The most notable causes were a low self-assessed English proficiency, a concern over errors, a concern over evaluation, and a concern over the impression made on others. Other causes related to a high English language CA were a lack of authentic oral practice in EFL classes, discouraging teachers and negative experiences of learning English, unrealistic internal demands for oral English performance, high external demands and expectations for oral English performance, the conversation partner's higher English proficiency, and the audience's large size and unfamiliarity. The most anxiety-arousing oral production tasks in EFL classes were presentations or speeches with or without notes in front of the class, acting in front of the class, pair debates with the class as audience, expressing thoughts and ideas to the class, presentations or speeches without notes while seated, group debates with the class as audience, and answering the teacher's questions involuntarily. The main features affecting the anxiety-arousing potential of an oral production task were a high degree of attention, a large audience, a high degree of evaluation, little time for preparation, little linguistic support, and a long duration.

Relevance: 20.00%

Abstract:

In a storage system where individual storage nodes are prone to failure, the redundant storage of data in a distributed manner across multiple nodes is a must to ensure reliability. Reed-Solomon codes possess the reconstruction property under which the stored data can be recovered by connecting to any k of the n nodes in the network across which data is dispersed. This property can be shown to lead to vastly improved network reliability over simple replication schemes. Also of interest in such storage systems is the minimization of the repair bandwidth, i.e., the amount of data needed to be downloaded from the network in order to repair a single failed node. Reed-Solomon codes perform poorly here as they require the entire data to be downloaded. Regenerating codes are a new class of codes which minimize the repair bandwidth while retaining the reconstruction property. This paper provides an overview of regenerating codes including a discussion on the explicit construction of optimum codes.
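The reconstruction property ("any k of n") can be demonstrated with a toy Reed-Solomon-style code over a small prime field; the field size, the (n, k) parameters and the message below are illustrative choices, not from the paper.

```python
# Toy Reed-Solomon-style code over GF(p): the message defines the unique
# degree < k polynomial through (0, m0) .. (k-1, m_{k-1}); n nodes store
# evaluations of that polynomial, and ANY k of them recover the message.
p = 257                      # illustrative prime field
k, n = 3, 6
message = [42, 7, 199]

def lagrange_at(points, x0):
    """Value at x0 of the degree < len(points) polynomial through points, mod p."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x0 - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p   # Fermat inverse
    return total

base = list(enumerate(message))                  # systematic part: x = 0..k-1
shares = base + [(x, lagrange_at(base, x)) for x in range(k, n)]  # parity nodes

subset = [shares[1], shares[4], shares[5]]       # any k of the n nodes
recovered = [lagrange_at(subset, x) for x in range(k)]
print("recovered:", recovered)                   # -> [42, 7, 199]
```

Note that repairing a single failed node in this scheme requires downloading k symbols, i.e., as much as the whole message; that is exactly the repair bandwidth which regenerating codes are designed to reduce.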

Relevance: 20.00%

Abstract:

The new paradigm of connectedness and empowerment brought by the interactivity feature of the Web 2.0 has been challenging the traditional centralized performance of mainstream media. The corporation has been able to survive the strong winds by transforming itself into a global multimedia business network embedded in the network society. By establishing networks, e.g. networks of production and distribution, the global multimedia business network has been able to identify potential solutions by opening the doors to innovation in a decentralized and flexible manner. Under this emerging context of re-organization, traditional practices like sourcing need to be re-explained, and that is precisely what this thesis attempts to tackle. Based on ICT and on the network society, the study seeks to explain, within the Finnish context, the particular case of Helsingin Sanomat (HS) and its relations with the youth news agency, Youth Voice Editorial Board (NÄT). In that sense, the study can be regarded as an explanatory embedded single case study, where HS is the principal unit of analysis and NÄT its embedded unit of analysis. The thesis reached its explanations through interrelated steps. First, it determined the role of ICT in HS's sourcing practices. Then it mapped an overview of HS's sourcing relations and provided a context in which NÄT was located. And finally, it established conceptualized institutional relational data between HS and NÄT for posterior measurement through social network analysis. The data set was collected via qualitative interviews addressed to online and offline editors of HS as well as interviews addressed to NÄT's personnel. The study concluded that ICT's interactivity and User Generated Content (UGC) are not sourcing tools as such but mechanisms used by HS for getting ideas that could turn into potential news stories. However, when it comes to visual communication, some exceptions were found.
The lack of official sources amid the immediacy of news work leads HS to rely on ICT interaction and UGC. ICT's input into the sourcing practice becomes more noticeable when interaction and UGC are well organized and coordinated into proper and innovative networks of alternative content collaboration. Currently, HS performs this sourcing practice via two projects that differ precisely in how they are coordinated. The first project found, Omakaupunki, is coordinated internally by the Sanoma Group-owned media houses HS, Vartti and Metro. The second project found is coordinated externally. The external alternative sourcing network, as it was labeled, consists of three actors, namely HS, NÄT (professionals in charge) and the youth. This network is a balanced and complete triad in which the actors are connected by relations of feedback, recognition, creativity and filtering. However, as innovation is approached very reluctantly, this content collaboration is a laboratory of experiments; a 'COLLABORATORY'.

Relevance: 20.00%

Abstract:

A common and practical paradigm in cooperative communication systems is the use of a dynamically selected 'best' relay to decode and forward information from a source to a destination. Such systems use two phases: a relay selection phase, in which the system uses transmission time and energy to select the best relay, and a data transmission phase, in which it uses the spatial diversity benefits of selection to transmit data. In this paper, we derive closed-form expressions for the overall throughput and energy consumption, and study the time and energy trade-off between the selection and data transmission phases. To this end, we analyze a baseline non-adaptive system and several adaptive systems that adapt the selection phase, relay transmission power, or transmission time. Our results show that while selection yields significant benefits, the selection phase's time and energy overhead can be significant. In fact, at the optimal point, the selection can be far from perfect, and depends on the number of relays and the mode of adaptation. The results also provide guidelines about the optimal system operating point for different modes of adaptation. The analysis also sheds new insights on the fast splitting-based algorithm considered in this paper for relay selection.
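The selection-overhead trade-off can be illustrated with a crude Monte Carlo sketch; this is not the paper's closed-form analysis or its splitting-based selection algorithm. It assumes i.i.d. Rayleigh fading across relays, a fixed per-relay probing cost, and a 10 dB mean SNR, all of which are invented parameters: probing more relays improves the selected channel but leaves less of the frame for data.

```python
import numpy as np

rng = np.random.default_rng(4)

n_relays, n_trials = 8, 20000
T, c = 1.0, 0.04             # frame duration; assumed selection cost per relay
mean_snr = 10.0              # assumed mean receive SNR (linear, i.e. 10 dB)

g = rng.exponential(1.0, (n_trials, n_relays))   # Rayleigh-fading power gains

# Probing m relays costs c*m of the frame but lets us forward via the best
# of the m probed; effective throughput trades overhead against diversity.
ms = np.arange(1, n_relays + 1)
tp = [(T - c * m) / T * np.mean(np.log2(1 + mean_snr * g[:, :m].max(axis=1)))
      for m in ms]
m_opt = int(ms[int(np.argmax(tp))])
print("throughput vs relays probed:", np.round(tp, 3))
print("optimal number of relays to probe:", m_opt)
```

Even this toy model reproduces the qualitative message of the abstract: the throughput-maximizing operating point probes only some of the relays, so the selection is deliberately "far from perfect".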

Relevance: 20.00%

Abstract:

Embryonic stem cells potentially offer ground-breaking insights into health and disease and are said to offer hope of discovering cures for many ailments unimaginable a few years ago. Human embryonic stem cells are undifferentiated, immature cells that possess an amazing ability to develop into almost any body cell, such as heart muscle, bone, nerve and blood cells, and possibly even organs in due course. This remarkable feature, together with their ability to proliferate indefinitely in vitro (in a test tube), has branded them as a so-called "miracle cure". Their potential use in clinical applications provides hope to many sufferers of debilitating and fatal medical conditions. However, the emergence of stem cell research has resulted in intense debates about its promises and dangers. On the one hand, advocates hail its potential for alleviating and even curing fatal and debilitating diseases such as Parkinson's disease, diabetes, heart ailments and so forth. On the other hand, opponents decry its dangers, drawing attention to the inherent risks of human embryo destruction, cloning for research purposes and, eventually, reproductive cloning. Lately, however, the policy battles surrounding human embryonic stem cell innovation have shifted from controversy over the research itself to scuffles over intellectual property rights. In fact, the ability to obtain patents represents a pivotal factor in the economic success or failure of this new biotechnology. Although stem cell patents tend to more or less satisfy the standard patentability requirements, they also raise serious ethical and moral questions about the meaning of the exclusions on ethical or moral grounds found in European and, to an extent, American and Australian patent laws. At present there is a sort of calamity over human embryonic stem cell patents in Europe and, to an extent, in Australia and the United States.
This in turn has created a sense of urgency to engage all relevant parties in the discourse on how best to approach the patenting of this new form of scientific innovation. In essence, this should become a highly favoured patenting priority. Otherwise, stem cell innovation and its reliance on patent protection risk turmoil, uncertainty, confusion and even a halt not only to stem cell research but also to further emerging biotechnology research and development. The patent system is premised upon the fundamental principle of balance, which ought to ensure that the temporary monopoly awarded to the inventor equals the social benefit provided by the disclosure of the invention. Ensuring and maintaining this balance within the patent system when patenting human embryonic stem cells is of crucial contemporary relevance. Yet the patenting of human embryonic stem cells raises some fundamental moral, social and legal questions. Overall, the present approach to patenting human embryonic stem cell related inventions is unsatisfactory and ineffective. This draws attention to a specific question which provides the conceptual framework for this work: how can the investigated patent offices successfully deal with the patentability of human embryonic stem cells? This in turn points to the thorny issue of the application of the morality clause in this field, in particular the interpretation of the exclusions on ethical or moral grounds found in Australian, American and European legislative and judicial precedents. The Thesis seeks to compare the laws and legal practices surrounding patentability of human embryonic stem cells in Australia and the United States with those of Europe. By using Europe as the primary case study for lessons and guidance, the central goal of the Thesis then becomes the determination of the type of solutions available to Europe, with prospects to apply these to Australia and the United States.
The Dissertation seeks to define the ethical implications that arise with patenting human embryonic stem cells and to offer resolutions to the key ethical dilemmas surrounding the patentability of human embryonic stem cells and other morally controversial biotechnology inventions. In particular, the goal of the Thesis is to propose a functional framework that may be used as a benchmark for an informed discussion on resolving the ethical and legal tensions that come with the patentability of human embryonic stem cells in the Australian, American and European patent worlds. Key research questions that arise from these objectives, and which thread continuously throughout the monograph, are: 1. How do common law countries such as Australia and the United States approach and deal with the patentability of human embryonic stem cells in their jurisdictions? These practices are then compared to the situation in Europe, as represented by the United Kingdom (first two chapters) and the Court of Justice of the European Union and European Patent Office decisions (Chapter 3 onwards), in order to obtain a full picture of present patenting procedures on European soil. 2. How are ethical and moral considerations taken into account at the investigated patent offices when assessing the patentability of human embryonic stem cell related inventions? In order to assess this part, the Thesis evaluates how ethical issues that arise with patent applications are dealt with by: a) the legislative history of the modern patent system, from its inception in fifteenth-century England to present-day patent laws; b) Australian, American and European patent offices, presently and in the past, including other relevant legal precedents on the subject matter; c) normative ethical theories; d) the notion of human dignity, used as the lowest common denominator for the interpretation of the European morality clause. 3.
Given the existence of the morality clause in the form of Article 6(1) of Directive 98/44/EC of the European Parliament and of the Council of 6 July 1998 on the legal protection of biotechnological inventions, which corresponds to Article 53(a) of the European Patent Convention, a special emphasis is put on Europe as a guiding principle for Australia and the United States. Any room for improvement of the European morality clause and Europe's current manner of evaluating the ethical tensions surrounding human embryonic stem cell inventions is examined. 4. A summary of the options (as represented by Australia, the United States and Europe) available as a basis for an optimal examination procedure for human embryonic stem cell inventions is depicted, and the best of these alternatives is deduced in order to create a benchmark framework. This framework is then applied and promoted as a tool to assist Europe (as represented by the European Patent Office) in examining human embryonic stem cell patent applications. This method suggests the possibility of implementing an institutional solution. 5. Ultimately, the question of whether such a reformed European patent system can be used as a foundation stone for a potential patent reform in Australia and the United States when examining human embryonic stem cells or other morally controversial inventions is surveyed. The author wishes to emphasise that the guiding thought while carrying out this work is to convey the significance of identifying, analysing and clarifying the ethical tensions surrounding patenting human embryonic stem cells and ultimately to present a solution that adequately assesses the patentability of human embryonic stem cell inventions and related biotechnologies.
In answering the key questions above, the Thesis strives to contribute to the broader stem cell debate about how and to what extent ethical and social positions should be integrated into the patenting procedure in the pluralistic and morally divided democracies of Europe and, subsequently, Australia and the United States.