892 results for EXPLOITING MULTICOMMUTATION
Abstract:
How can empirical evidence of adverse effects from exposure to noxious agents, which is often incomplete and uncertain, be used most appropriately to protect human health? We examine several important questions on the best uses of empirical evidence in regulatory risk management decision-making raised by the US Environmental Protection Agency (EPA)'s science policy concerning uncertainty and variability in human health risk assessment. In our view, the US EPA (and other agencies that have adopted similar views of risk management) can often improve decision-making by decreasing reliance on default values and assumptions, particularly when causation is uncertain. This can be achieved by more fully exploiting decision-theoretic methods and criteria that explicitly account for uncertain, possibly conflicting scientific beliefs and that can be fully studied by advocates and adversaries of a policy choice in administrative decision-making involving risk assessment. Substituting decision-theoretic frameworks for default assumption-driven policies also allows stakeholder attitudes toward risk to be incorporated into policy debates, so that the public and risk managers can more explicitly identify the roles of risk aversion or other attitudes toward risk and uncertainty in policy recommendations. Decision theory provides a scientifically sound way to account explicitly for new knowledge and its effects on eventual policy choices. Although these improvements can complicate regulatory analyses, simplifying default assumptions can create substantial costs to society and can prematurely cut off consideration of new scientific insights (e.g., possible beneficial health effects from exposure to sufficiently low 'hormetic' doses of some agents). In many cases, the administrative burden of applying decision-analytic methods is likely to be more than offset by the improved effectiveness of regulations in achieving desired goals.
Because many foreign jurisdictions adopt US EPA reasoning and methods of risk analysis, it may be especially valuable to incorporate decision-theoretic principles that transcend local differences among jurisdictions.
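As an illustration of the decision-theoretic framing the abstract advocates, the sketch below chooses a regulatory action by minimizing expected cost over uncertain dose-response hypotheses instead of committing to a single default assumption. All hypothesis names, probabilities and costs here are invented for illustration only.

```python
# Illustrative sketch (hypothetical numbers): decision-theoretic choice of a
# regulatory action under uncertainty about the true dose-response relation.

# Prior beliefs over competing dose-response hypotheses (must sum to 1).
hypotheses = {"linear_no_threshold": 0.6, "threshold": 0.3, "hormetic": 0.1}

# Societal cost (negative utility) of each action under each hypothesis.
costs = {
    "strict_limit":   {"linear_no_threshold": 10, "threshold": 12, "hormetic": 15},
    "moderate_limit": {"linear_no_threshold": 14, "threshold": 8,  "hormetic": 9},
    "no_limit":       {"linear_no_threshold": 30, "threshold": 11, "hormetic": 5},
}

def expected_cost(action):
    """Expectation of the action's cost over the hypothesis distribution."""
    return sum(p * costs[action][h] for h, p in hypotheses.items())

best = min(costs, key=expected_cost)
print(best, expected_cost(best))
```

Updating the hypothesis probabilities as new evidence arrives changes the expected costs, and possibly the chosen action, which is how new knowledge feeds explicitly into the eventual policy choice.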
Abstract:
The cyclotide family of plant proteins is of interest because of their unique topology, which combines a head-to-tail cyclic backbone with an embedded cystine knot, and because their remarkable chemical and biological properties make them ideal candidates as grafting templates for biologically active peptide epitopes. The present study describes the first steps towards exploiting the cyclotide framework by synthesizing and structurally characterizing two grafted analogues of the cyclotide kalata B1. The modified peptides have polar or charged residues substituted for residues that form part of a surface-exposed hydrophobic patch that plays a significant role in the folding and biological activity of kalata B1. Both analogues retain the native cyclotide fold, but lack the undesired haemolytic activity of their parent molecule, kalata B1. This finding confirms the tolerance of the cyclotide framework to residue substitutions and opens up possibilities for the substitution of biologically active peptide epitopes into the framework.
Abstract:
Applications that exploit contextual information in order to adapt their behaviour to dynamically changing operating environments and user requirements are increasingly being explored as part of the vision of pervasive or ubiquitous computing. Despite recent advances in infrastructure to support these applications through the acquisition, interpretation and dissemination of context data from sensors, they remain prohibitively difficult to develop and have made little penetration beyond the laboratory. This situation persists largely due to a lack of appropriately high-level abstractions for describing, reasoning about and exploiting context information as a basis for adaptation. In this paper, we present our efforts to address this challenge, focusing on our novel approach involving the use of preference information as a basis for making flexible adaptation decisions. We also discuss our experiences in applying our conceptual and software frameworks for context and preference modelling to a case study involving the development of an adaptive communication application.
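A minimal sketch of the preference-driven adaptation idea described above: adaptation options are scored against user preferences that fire on context attributes, and the best-scoring option is selected. The scenario, rules and scoring scheme are invented for illustration, not taken from the paper's framework.

```python
# Toy preference-based adaptation decision: pick a phone's alert mode from
# context. Rules and weights are hypothetical.

context = {"location": "meeting", "battery": "low"}

# Each rule: (condition on the context, score it contributes to each option).
preference_rules = [
    (lambda c: c["location"] == "meeting", {"silent": 2, "vibrate": 1, "ring": -2}),
    (lambda c: c["battery"] == "low",      {"silent": 1, "vibrate": -1, "ring": 0}),
]

def choose_mode(ctx, options=("silent", "vibrate", "ring")):
    """Sum the scores of all rules whose condition holds; return the best option."""
    scores = {o: 0 for o in options}
    for condition, weights in preference_rules:
        if condition(ctx):
            for option, w in weights.items():
                scores[option] += w
    return max(options, key=lambda o: scores[o])

print(choose_mode(context))
```

Keeping the preferences declarative, as here, is what lets the adaptation decision change when the context or the user's stated preferences change, without rewriting application logic.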
Abstract:
Rights talk dominates contemporary moral discourse. It is also having a growing impact on the development of legal principle and doctrine. One of the best known general arguments in support of rights-based moral theories is the one given by John Rawls, who claims that only rights-based theories take seriously the distinction between human beings; only they can be counted on to protect certain rights and interests that are so paramount that they are beyond the demands of net happiness (Rawls 1971). Charges and assertions of this nature have been extremely influential. After the Second World War, there was an immense increase in rights talk, both in the sheer volume of that talk and in the number of supposed rights being claimed. Rights doctrine has progressed a long way since its original modest aim of providing “a legitimization of … claims against tyrannical or exploiting regimes” (Benn 1978: 61). As Tom Campbell points out: The human rights movement is based on the need for a counter-ideology to combat the abuses and misuses of political authority by those who invoke, as a justification for their activities, the need to subordinate the particular interests of individuals to the general good (Campbell 1996: 13).
Abstract:
The international business literature has evolved from addressing the question of how MNCs are more efficient organizations for exploiting innovations globally to the issue of how MNCs are more effective in creating new products, processes and technologies worldwide. However, the question of how MNCs contribute to, and share benefits with, the host country stakeholders has received little attention in the literature. Our research shows that the contribution of MNCs to the subsidiary company and country stakeholders in the form of R&D, exports and royalty earnings is significantly less than expected. Insufficient compensation to local subsidiary stakeholders may undermine the motivation of subsidiary managers to discover new sources of advantage for the MNC. It may also discourage subsidiary country governments from offering incentives to MNCs for inward FDI.
Abstract:
In multimedia retrieval, a query is typically interactively refined towards the ‘optimal’ answers by exploiting user feedback. However, in existing work, the refined query is re-evaluated in each iteration. This is not only inefficient but also fails to exploit the answers that may be common between iterations. In this paper, we introduce a new approach called SaveRF (Save random accesses in Relevance Feedback) for iterative relevance feedback search. SaveRF predicts the potential candidates for the next iteration and maintains this small set for efficient sequential scan. By doing so, repeated candidate accesses can be saved, hence reducing the number of random accesses. In addition, an efficient scan of the overlap before the search starts also tightens the search space with a smaller pruning radius. We implemented SaveRF, and our experimental study on real-life data sets shows that it can reduce the I/O cost significantly.
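A toy sketch of the caching idea SaveRF is described as using: rather than re-evaluating each refined query against the whole database, keep a small candidate set predicted to overlap with the next iteration's answers and scan it sequentially. The data, refinement rule and candidate predictor below are invented stand-ins, not the paper's actual algorithm.

```python
# Toy iterative relevance-feedback loop that answers each refined query from
# a small cached candidate set instead of the full database.

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def knn(query, points, k=3):
    """k nearest neighbours of query among points (sequential scan)."""
    return sorted(points, key=lambda p: dist2(p, query))[:k]

database = [(float(i), float(i % 5)) for i in range(20)]

query = (0.0, 0.0)
candidates = database                       # iteration 1: full scan
for feedback in [(0.5, 0.0), (0.5, 0.0)]:
    answers = knn(query, candidates)        # scan only the cached set
    # Predict candidates likely shared with the next iteration: a slightly
    # larger neighbourhood of the current query (toy predictor).
    candidates = knn(query, database, k=8)
    query = tuple(q + f for q, f in zip(query, feedback))  # refined query

print(knn(query, candidates))
```

Because successive refined queries stay close to each other, the small cached neighbourhood keeps containing the true answers, which is the property that lets repeated random accesses be avoided.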
Abstract:
A major impediment to developing real-time computer vision systems has been the computational power and level of skill required to process video streams in real time. This has meant that many researchers have either analysed video streams off-line or used expensive dedicated hardware acceleration techniques. Recent software and hardware developments have greatly eased the development burden of real-time image analysis, leading to the development of portable systems using cheap PC hardware and software exploiting the Multimedia Extension (MMX) instruction set of the Intel Pentium chip. This paper describes the implementation of a computationally efficient computer vision system for recognizing hand gestures using efficient coding and MMX acceleration to achieve real-time performance on low-cost hardware.
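To make the connection to SIMD acceleration concrete, the fragment below (plain Python, toy frame, hypothetical threshold) shows the kind of uniform per-pixel operation that instruction sets like MMX speed up: an MMX/SIMD version would apply the same comparison to 8 pixels per instruction instead of one at a time.

```python
# Data-parallel per-pixel operation typical of real-time vision front-ends:
# thresholding a grayscale frame to segment a bright (hand) region.
# Frame values and threshold are toy data.

frame = [
    [10, 12, 200, 210],
    [11, 13, 205, 220],
    [ 9, 10,  14,  12],
]

def threshold(img, t=128):
    """Binary mask: 1 where the pixel is brighter than t, else 0."""
    return [[1 if px > t else 0 for px in row] for row in img]

mask = threshold(frame)
print(mask)
```

The operation is branch-free per pixel and touches memory sequentially, which is exactly why it maps well onto SIMD instructions.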
Abstract:
Discriminatory language became an important social issue in the west in the late twentieth century, when debates on political correctness and minority rights focused largely on the issue of respect in language. Japan is often criticized for having made only token attempts to address this issue. This paper investigates how one marginalized group—people with disabilities—has dealt with discriminatory and disrespectful language. The debate has been played out in four public spaces: the media, the law, literature, and the Internet. The paper discusses the kind of language that has generated protest, the empowering strategies of direct action employed to combat its use, and the response of the media, the bureaucracy, and the literati. Government policy has not kept pace with social change in this area; where it exists at all, it is often contradictory and far from clear. I argue that while the laws were rewritten primarily as a result of external international trends, disability support groups achieved domestic media compliance by exploiting the keen desire of media organizations to avoid public embarrassment. In the absence of language policy formulated at the government level, the media effectively instituted a policy of self-censorship through strict guidelines on language use, thereby becoming its own best watchdog. Disability support groups have recently enlisted the Internet as an agent of further empowerment in the ongoing discussion of the issue.
Abstract:
The main aim of the proposed approach presented in this paper is to improve Web information retrieval effectiveness by overcoming the problems associated with a typical keyword matching retrieval system, through the use of concepts and an intelligent fusion of confidence values. By exploiting the conceptual hierarchy of the WordNet (G. Miller, 1995) knowledge base, we show how to effectively encode the conceptual information in a document using the semantic information implied by the words that appear within it. Rather than treating a word as a string made up of a sequence of characters, we consider a word to represent a concept.
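The concept-encoding idea described above can be illustrated with a small sketch: words are mapped up a hypernym hierarchy, so that 'car' and 'truck' contribute counts to the same ancestor concept instead of being unrelated character strings. The hierarchy below is a tiny hand-made stand-in for WordNet, not WordNet itself.

```python
# Toy concept vector built from a stand-in hypernym hierarchy.

hypernym = {   # word/concept -> parent concept (invented WordNet-like fragment)
    "car": "motor_vehicle", "truck": "motor_vehicle",
    "motor_vehicle": "vehicle", "bicycle": "vehicle",
}

def concepts(word):
    """Return the chain of ancestor concepts above a word."""
    chain = []
    while word in hypernym:
        word = hypernym[word]
        chain.append(word)
    return chain

def concept_vector(doc_words):
    """Count each word together with all of its ancestor concepts."""
    vec = {}
    for w in doc_words:
        for c in [w] + concepts(w):
            vec[c] = vec.get(c, 0) + 1
    return vec

print(concept_vector(["car", "truck"]))
```

A query for "vehicle" can now match a document mentioning only "car" and "truck", which is the retrieval gain over plain keyword matching.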
Abstract:
The archaeological site of Arslantepe (Malatya province, Turkey) is a case study of potential interest for the interaction between climate change and the history of civilization. The site, occupied almost uninterruptedly over a relatively long period (6250-2700 BP), has yielded a large quantity of bone remains, distributed along a relatively detailed archaeological stratigraphy supported by radiocarbon dating. These remains, investigated with the techniques of stable isotope geochemistry, can serve as effective palaeoclimatic proxies. In this work, the isotopic composition of 507 samples of human and animal bone remains (mainly sheep, goats and cattle) was studied. The isotope ratios examined concern oxygen (δ18Ocarb, δ18Oph), carbon (δ13Ccarb, δ13Ccoll) and nitrogen (δ15N), measured in the mineral and organic fractions of bone; the variability of these parameters over time, which mainly reflect palaeonutrition, can be correlated, directly or indirectly, with changes in environmental parameters such as atmospheric temperature and humidity. The results indicate that the diet of the wild and domestic animals of Arslantepe was almost exclusively based on plants with a C3 photosynthetic cycle, generally typical of humid or temperate climates. The presence of C4 plants, more typical of arid conditions, appears to be recognizable only in the diet of cattle (Bos taurus). The human diet was exclusively terrestrial, based on cereals and caprine meat, with a very small or entirely absent proportion of pork and beef.
From a palaeoclimatic point of view, the main result of this work is the recognition that a long-term palaeoclimatic signal is preserved (δ18OW, the oxygen isotope composition of ingested water), identifying a relative humidity maximum around 5000 BP that correlates, in both trend and amplitude of variation, with palaeoclimatic records from lacustrine sediments located in regions adjacent to the study area. Comparison of the three isotopic signals also revealed two short-term dry climatic anomalies, apparently referable to two episodes of regional-scale aridity documented in the literature.
Abstract:
This thesis presents the formal definition of a novel Mobile Cloud Computing (MCC) extension of the Networked Autonomic Machine (NAM) framework, a general-purpose conceptual tool which describes large-scale distributed autonomic systems. The introduction of autonomic policies in the MCC paradigm has proved to be an effective technique to increase the robustness and flexibility of MCC systems. In particular, autonomic policies based on continuous resource and connectivity monitoring help automate context-aware decisions for computation offloading. We have also provided NAM with a formalization in terms of a transformational operational semantics in order to fill the gap between its existing Java implementation NAM4J and its conceptual definition. Moreover, we have extended NAM4J by adding several components with the purpose of managing large scale autonomic distributed environments. In particular, the middleware allows for the implementation of peer-to-peer (P2P) networks of NAM nodes. Moreover, NAM mobility actions have been implemented to enable the migration of code, execution state and data. Within NAM4J, we have designed and developed a component, denoted as context bus, which is particularly useful in collaborative applications in that, if replicated on each peer, it instantiates a virtual shared channel allowing nodes to notify and get notified about context events. Regarding the autonomic policies management, we have provided NAM4J with a rule engine, whose purpose is to allow a system to autonomously determine when offloading is convenient. We have also provided NAM4J with trust and reputation management mechanisms to make the middleware suitable for applications in which such aspects are of great interest. To this purpose, we have designed and implemented a distributed framework, denoted as DARTSense, where no central server is required, as reputation values are stored and updated by participants in a subjective fashion. 
We have also investigated the literature regarding MCC systems. The analysis pointed out that all MCC models focus on mobile devices, and consider the Cloud as a system with unlimited resources. To contribute in filling this gap, we defined a modeling and simulation framework for the design and analysis of MCC systems, encompassing both the mobile and the Cloud sides. We have also implemented a modular and reusable simulator of the model. We have applied the NAM principles to two different application scenarios. First, we have defined a hybrid P2P/cloud approach where components and protocols are autonomically configured according to specific target goals, such as cost-effectiveness, reliability and availability. Merging the P2P and cloud paradigms brings together the advantages of both: high availability, provided by the Cloud presence, and low cost, obtained by exploiting inexpensive peer resources. As an example, we have shown how the proposed approach can be used to design NAM-based collaborative storage systems based on an autonomic policy that decides how to distribute data chunks among peers and the Cloud, according to cost minimization and data availability goals. As a second application, we have defined an autonomic architecture for decentralized urban participatory sensing (UPS) which bridges sensor networks and mobile systems to improve effectiveness and efficiency. The developed application allows users to retrieve and publish different types of sensed information by using the features provided by NAM4J's context bus. Trust and reputation are managed through the application of DARTSense mechanisms. Also, the application includes an autonomic policy that detects areas characterized by few contributors, and tries to recruit new providers by migrating the code necessary for sensing through NAM mobility actions.
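A minimal sketch of an autonomic offloading rule of the kind the thesis describes: monitored resource and connectivity values drive the decision of whether a task runs locally or is offloaded to the Cloud. The thresholds and parameter names below are invented for illustration, not taken from NAM4J's rule engine.

```python
# Toy context-aware offloading policy driven by monitored device state.

def should_offload(battery_pct, bandwidth_mbps, task_cost_mcycles,
                   min_bandwidth=1.0, low_battery=20.0, heavy_task=500.0):
    """Decide whether to offload a task, from monitored context values."""
    if bandwidth_mbps < min_bandwidth:
        return False                       # too poorly connected to offload
    if battery_pct < low_battery:
        return True                        # save energy when battery is low
    return task_cost_mcycles > heavy_task  # otherwise offload only heavy work

print(should_offload(15.0, 5.0, 100.0))    # low battery, good link
print(should_offload(80.0, 5.0, 100.0))    # light task, healthy device
```

Because the rule reads only monitored values, re-evaluating it as the context changes is what makes the offloading decision autonomic rather than fixed at deployment time.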
Abstract:
Of the ~1.7 million SINE elements in the human genome, only a tiny number are estimated to be active in transcription by RNA polymerase (Pol) III. Tracing the individual loci from which SINE transcripts originate is complicated by their highly repetitive nature. By exploiting RNA-Seq datasets and unique SINE DNA sequences, we devised a bioinformatic pipeline allowing us to identify Pol III-dependent transcripts of individual SINE elements. When applied to ENCODE transcriptomes of seven human cell lines, this search strategy identified ~1300 Alu loci and ~1100 MIR loci corresponding to detectable transcripts, with ~120 Alu and ~60 MIR loci expressed in at least three cell lines. In vitro transcription of selected SINEs did not reflect their in vivo expression properties, and required the native 5’-flanking region in addition to the internal promoter. We also identified a cluster of expressed AluYa5-derived transcription units, juxtaposed to snaR genes on chromosome 19, formed by a promoter-containing left monomer fused to an Alu-unrelated downstream moiety. Autonomous Pol III transcription was also revealed for SINEs nested within Pol II-transcribed genes, raising the possibility of an underlying mechanism for Pol II gene regulation by SINE transcriptional units. Moreover, the application of our bioinformatic pipeline to RNA-seq data both from cells subjected to an in vitro pro-oncogenic stimulus and from in vivo matched tumor and non-tumor samples allowed us to detect increased Alu RNA expression as well as the source loci of such deregulation. The ability to investigate SINE transcriptomes at single-locus resolution will facilitate both the identification of novel biologically relevant SINE RNAs and the assessment of SINE expression alteration under pathological conditions.
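The core difficulty the pipeline addresses — assigning reads from highly repetitive elements to individual loci — can be sketched as a unique-mapping filter: a read counts toward a SINE locus only if it matches a locus-specific stretch (e.g. unique flanking sequence), not the repetitive body shared by all copies. Locus names and sequences below are toy data, not the paper's pipeline.

```python
# Toy locus-level read assignment: keep only uniquely matching reads.

loci = {   # locus -> sequence including a locus-specific stretch (toy data)
    "AluY_chr1":  "GGCCGGGCGCTTAAC",
    "AluY_chr19": "GGCCGGGCGCGGTAA",
}

def assign(read):
    """Assign a read to a locus only if it matches exactly one locus."""
    hits = [name for name, seq in loci.items() if read in seq]
    return hits[0] if len(hits) == 1 else None   # multi-mappers are discarded

reads = ["GGCCGGGCG", "CTTAAC", "CGGTAA"]   # first read hits the shared body
counts = {}
for r in reads:
    locus = assign(r)
    if locus:
        counts[locus] = counts.get(locus, 0) + 1
print(counts)
```

Discarding the multi-mapping read loses some signal but is what makes the per-locus counts trustworthy, mirroring the trade-off any single-locus SINE quantification has to make.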
Abstract:
Molecular nanomagnets are spin clusters whose topology and magnetic interactions can be modulated at the level of the chemical synthesis. They are formed by a small number of transition metal ions coupled by Heisenberg exchange interactions. Each cluster is magnetically isolated from its neighbors by organic ligands, so that individual units do not interact with one another. Therefore, we can investigate the magnetic properties of an isolated molecular nanomagnet by bulk measurements. The present thesis has been mostly devoted to the experimental investigation of the magnetic properties and spin dynamics of different classes of antiferromagnetic (AF) molecular rings. This study has exploited various investigation techniques, such as Nuclear Magnetic Resonance (NMR), muon spin relaxation (muSR) and SQUID magnetometry. We investigate the magnetic properties and the phonon-induced relaxation dynamics of the first regular Cr9 antiferromagnetic (AF) ring, which represents a prototype frustrated AF ring. Magnetically open AF rings like Cr8Cd are model systems for the study of the microscopic magnetic behaviour of finite AF Heisenberg chains. In this type of system the magnetic behaviour depends on the length and on the parity of the chain (odd or even). In order to study the local spin densities on the Cr sites, Cr-NMR spectra were collected at low temperature. The experimental results confirm the theoretical predictions for the spin configuration. Finally, the study of Dy6, the first rare-earth-based ring ever synthesized, has been performed by AC-SQUID and muSR measurements. We found that the dynamics is characterized by more than one characteristic correlation time, whose values depend strongly on the applied field.
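The frustration mentioned for odd-membered AF rings like Cr9 has a simple classical analogue that can be checked by exhaustive enumeration: with antiferromagnetic bonds around a ring, an even ring can satisfy every bond, while an odd ring necessarily leaves at least one bond frustrated. The Ising toy model below illustrates only this parity effect, not the quantum Heisenberg physics of the actual clusters.

```python
# Classical Ising toy model of parity-induced frustration on an AF ring.
from itertools import product

def ground_energy(n, J=1.0):
    """Minimum of E = J * sum_i s_i * s_{i+1} over all 2^n Ising rings (J > 0, AF)."""
    best = float("inf")
    for spins in product((-1, 1), repeat=n):
        e = J * sum(spins[i] * spins[(i + 1) % n] for i in range(n))
        best = min(best, e)
    return best

print(ground_energy(8))   # even ring: all 8 bonds can be satisfied
print(ground_energy(9))   # odd ring: at least one bond stays frustrated
```

The even ring reaches the ideal energy -nJ with alternating spins; the odd ring cannot alternate all the way around, so its minimum is -(n-2)J, the classical signature of the frustration that makes rings like Cr9 interesting.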
Abstract:
An unsupervised learning procedure based on maximizing the mutual information between the outputs of two networks receiving different but statistically dependent inputs is analyzed (Becker S. and Hinton G., Nature, 355 (1992) 161). By exploiting a formal analogy to supervised learning in parity machines, the theory of zero-temperature Gibbs learning for the unsupervised procedure is presented for the case that the networks are perceptrons and for the case of fully connected committees.
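The quantity being maximized in the procedure above, the mutual information between two network outputs, can be illustrated numerically from a joint output distribution. The joint tables below are toy data, not outputs of trained networks: perfectly correlated binary outputs carry 1 bit of mutual information, independent ones carry 0.

```python
# Mutual information I(X;Y) of two binary outputs from their joint distribution.
from math import log2

def mutual_info(joint):
    """I(X;Y) in bits, from a dict {(x, y): p(x, y)}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p          # marginal of the first output
        py[y] = py.get(y, 0) + p          # marginal of the second output
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

print(mutual_info({(0, 0): 0.5, (1, 1): 0.5}))             # correlated outputs
print(mutual_info({(0, 0): 0.25, (0, 1): 0.25,
                   (1, 0): 0.25, (1, 1): 0.25}))           # independent outputs
```

Maximizing this quantity over the two networks' weights pushes their outputs to agree on the structure shared by their two input streams, which is the unsupervised signal the procedure exploits.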
Abstract:
We review recent theoretical progress on the statistical mechanics of error correcting codes, focusing on low-density parity-check (LDPC) codes in general, and on Gallager and MacKay-Neal codes in particular. By exploiting the relation between LDPC codes and Ising spin systems with multispin interactions, one can carry out a statistical mechanics based analysis that determines the practical and theoretical limitations of various code constructions, corresponding to dynamical and thermodynamical transitions, respectively, as well as the behaviour of error-exponents averaged over the corresponding code ensemble as a function of channel noise. We also contrast the results obtained using methods of statistical mechanics with those derived in the information theory literature, and show how these methods can be generalized to include other channel types and related communication problems.
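To make the LDPC setting concrete, here is a minimal sketch of the objects involved: a tiny parity-check matrix H, syndrome computation, and one round of bit-flipping decoding. The matrix and channel are toy-sized and chosen for illustration only; practical LDPC codes are far larger and sparser, and the review's statistical-mechanics analysis concerns their ensemble behaviour, not this decoder.

```python
# Tiny parity-check code: syndrome check and one bit-flipping decoding step.

H = [  # 3 parity checks on 6 bits (rows: checks, columns: bits)
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def syndrome(word):
    """Parity of each check over GF(2); all zeros means a valid codeword."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

def bit_flip_step(word):
    """Flip the bit participating in the most unsatisfied checks."""
    s = syndrome(word)
    if not any(s):
        return word
    votes = [sum(si * row[j] for si, row in zip(s, H)) for j in range(len(word))]
    j = votes.index(max(votes))
    return word[:j] + [word[j] ^ 1] + word[j + 1:]

codeword = [0, 0, 0, 0, 0, 0]       # the all-zero word satisfies every check
received = [0, 1, 0, 0, 0, 0]       # channel noise flipped one bit
decoded = bit_flip_step(received)
print(syndrome(received), decoded)
```

In the spin-system mapping discussed above, each parity check of H becomes a multispin interaction, and decoding corresponds to finding the ground state of the resulting Ising Hamiltonian at the Nishimori temperature.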