879 results for next generation sequencing
Abstract:
After the 2010 Haiti earthquake, which struck the city of Port-au-Prince, the capital of Haiti, a multidisciplinary working group of specialists (seismologists, geologists, engineers and architects) from different Spanish universities and also from Haiti joined efforts under the SISMO-HAITI project (financed by the Universidad Politecnica de Madrid), with one objective: the evaluation of seismic hazard and risk in Haiti and its application to seismic design, urban planning, emergency and resource management. In this paper, as a first step towards a structural damage estimation for future earthquakes in the country, a calibration of damage functions has been carried out by means of a two-stage procedure. After compiling a database of the damage observed in the city after the earthquake, the exposure model (building stock) was classified, and through an iterative two-step calibration process a specific set of damage functions for the country has been proposed. Additionally, Next Generation Attenuation (NGA) models and Vs30 models have been analysed to choose the most appropriate for the seismic risk estimation in the city. Finally, in a subsequent paper, these functions will be used to estimate a seismic risk scenario for a future earthquake.
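To make the two-stage calibration concrete, here is a minimal sketch of how damage (fragility) functions of lognormal form can be fitted to observed damage rates and then refined iteratively. The curve form, the synthetic survey data and the use of scipy.optimize are illustrative assumptions, not the SISMO-HAITI implementation.

```python
# Minimal sketch of a two-stage damage-function calibration, assuming
# lognormal fragility curves P(D >= ds | PGA) = Phi(ln(PGA / median) / beta).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fragility(pga, median, beta):
    """Probability of reaching or exceeding a damage state at a given PGA."""
    return norm.cdf(np.log(pga / median) / beta)

def misfit(params, pga_obs, rate_obs):
    """Squared error between predicted and observed damage exceedance rates."""
    median, beta = params
    return np.sum((fragility(pga_obs, median, beta) - rate_obs) ** 2)

# Stage 1: fit one building class against the post-earthquake damage survey.
pga_obs = np.array([0.2, 0.3, 0.4, 0.5])       # ground motion at sites (g), synthetic
rate_obs = np.array([0.15, 0.35, 0.60, 0.80])  # observed exceedance rates, synthetic
fit = minimize(misfit, x0=[0.4, 0.6], args=(pga_obs, rate_obs),
               bounds=[(0.05, 2.0), (0.1, 1.5)])

# Stage 2 (not shown): reclassify the exposure model using the stage-1 curves
# and repeat the fit, iterating until the parameters stabilise.
median, beta = fit.x
print(f"calibrated median PGA = {median:.3f} g, beta = {beta:.3f}")
```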
Abstract:
PAS1192-2 (2013) outlines the "fundamental principles of Level 2 information modelling"; one of these principles is the use of what is commonly referred to as a Common Data Environment (CDE). A CDE can be described as an internet-enabled cloud-hosting platform through which all construction team members access shared project information. For the construction sector to achieve increased productivity goals, the next generation of industry professionals will need to be educated in a way that provides them with an appreciation of Building Information Modelling (BIM) working methods at all levels, including an understanding of how data in a CDE should be structured, managed, shared and published. This presents a challenge for educational institutions in terms of providing a CDE that addresses the requirements set out in PAS1192-2 and mirrors organisational and professional working practices without causing confusion through over-complexity. This paper presents the findings of a two-year study undertaken at Ulster University comparing the use of a leading industry CDE platform with one derived from the in-house Virtual Learning Environment (VLE) for the delivery of a student BIM project. The research methodology employed was a qualitative case study analysis, focusing on observations from the academics involved and feedback from students. The results of the study show advantages for both CDE platforms depending on the learning outcomes required.
Abstract:
This paper assesses the uses and misuses in the application of the European Arrest Warrant (EAW) system in the European Union. It examines the main quantitative results of this extradition system achieved between 2005 and 2011 on the basis of the existing statistical knowledge on its implementation at EU official levels. The EAW has been anchored in a high level of 'mutual trust' between the participating states' criminal justice regimes and authorities. This reciprocal confidence, however, has been subject to an increasing number of challenges resulting from its practical application, presenting a dual conundrum: 1. Principle of proportionality: Who are the competent judicial authorities cooperating with each other and ensuring that there are sufficient impartial controls over the necessity and proportionality of the decisions on the issuing and execution of EAWs? 2. Principle of division of powers: How can criminal justice authorities be expected to handle different criminal judicial traditions regarding what is supposed to constitute a 'serious' or 'minor' crime in their respective legal settings, and who is ultimately to determine (divorced from political considerations) when it is duly justified to make the EAW system operational? It is argued that the next generation of the EU's criminal justice cooperation and the EAW need to recognise and acknowledge that the mutual trust premise upon which the European system has been built so far is no longer viable without devising new EU policy stakeholders' structures and evaluation mechanisms. These should allow for the recalibration of mutual trust and mistrust in EU justice systems in light of the experiences of the criminal justice actors and practitioners who have a stake in putting the EAW into daily effect. Such a 'bottom-up approach' should be backed up with the best impartial and objective evaluation, an improved system of statistical collection and an independent qualitative assessment of its implementation. This should be placed as the central axis of a renewed EAW framework, which should seek to better ensure the accountability, impartial (EU-led) scrutiny and transparency of member states' application of the EAW, in light of the general principles and fundamental rights constituting the foundations of the European system of criminal justice cooperation.
Abstract:
Final project of the Integrated Master's Degree in Medicine, Faculdade de Medicina, Universidade de Lisboa, 2014
Abstract:
Is Europe's immigration policy attractive? One of the priorities driving current EU debates on labour immigration policies is the perceived need to boost Europe's attractiveness vis-à-vis 'talented' and 'highly skilled' immigrants. The EU sees itself playing a role in persuading immigrants to choose Europe over other competing destinations, such as the US or Canada. This book critically examines the determinants and challenges characterising discussions focused on the attractiveness of labour migration policies in the EU as well as in other international settings. It calls for re-thinking some of the most commonly held premises and assumptions underlying the narratives of 'attractiveness' and the 'global competition for talent' in migration policy debates. How can an immigration policy in fact be made 'attractive', and what are the incentives at play (if any)? A multidisciplinary team of leading scholars and experts in migration studies addresses the main issues and challenges related to the role played by rights and discrimination, qualifications and skills, and the matching of demand and supply in needs-based migration policies. The experiences of other jurisdictions, such as South America, Canada and the United States, are also covered: are these countries indeed so 'attractive' and 'competitive', and if so, what makes them more attractive than the EU? On the basis of the discussions and findings presented across the various contributions, the book identifies a number of priorities for policy formulation and design in the next generation of EU labour migration policies. In particular, it highlights important initiatives that the new European Commission should focus on in the years to come.
Abstract:
The Transatlantic Trade and Investment Partnership (TTIP) is an effort by the United States and the European Union to reposition themselves for a world of diffuse economic power and intensified global competition. It is a next-generation economic negotiation that breaks the mould of traditional trade agreements. At the heart of the ongoing talks is the question of whether, and in which areas, the two major democratic actors in the global economy can address the costly frictions generated by their deep commercial integration by aligning rules and other instruments. The aim is to reduce duplication in various ways in areas where levels of regulatory protection are equivalent, as well as to foster wide-ranging regulatory cooperation and set a benchmark for high-quality global norms. In this volume, European and American experts explain the economic context of TTIP and its geopolitical implications, and then explore the challenges and consequences of US-EU negotiations across numerous sensitive areas, ranging from food safety and public procurement to economic and regulatory assessments of technical barriers to trade, automotive, chemicals, energy, services, investor-state dispute settlement mechanisms and regulatory cooperation. Their insights cut through the confusion and the tremendous public controversies now swirling around TTIP, and help decision-makers understand how the United States and the European Union can remain rule-makers rather than rule-takers in a globalising world in which their relative influence is waning.
Abstract:
The first centromeric protein identified in any species was CENP-A, a divergent member of the histone H3 family that was recognised by autoantibodies from patients with scleroderma-spectrum disease. It has recently been suggested that this protein be renamed CenH3. Here, we argue that the original name should be maintained, both because it is the basis of a long-established nomenclature for centromere proteins and because it avoids confusion arising from the presence of canonical histone H3 at centromeres.
Abstract:
Inducible epigenetic changes in eukaryotes are believed to enable rapid adaptation to environmental fluctuations. We have found distinct regions of the Arabidopsis genome that are susceptible to DNA (de)methylation in response to hyperosmotic stress. The stress-induced epigenetic changes are associated with conditionally heritable adaptive phenotypic stress responses. However, these stress responses are primarily transmitted to the next generation through the female lineage, owing to widespread DNA glycosylase activity in the male germline, and are extensively reset in the absence of stress. Using the CNI1/ATL31 locus as an example, we demonstrate that epigenetically targeted sequences function as distantly acting control elements of antisense long non-coding RNAs, which in turn regulate targeted gene expression in response to stress. Collectively, our findings reveal that plants use a highly dynamic maternal 'short-term stress memory' with which to respond to adverse external conditions. This transient memory relies on the DNA methylation machinery and associated transcriptional changes to extend the phenotypic plasticity accessible to the immediate offspring.
Abstract:
Mobile Edge Computing enables the deployment of services, applications, content storage and processing in close proximity to mobile end users. This highly distributed computing environment can be used to provide ultra-low latency, precise positional awareness and agile applications, which could significantly improve user experience. In order to achieve this, it is necessary to consider next-generation paradigms such as Information-Centric Networking and Cloud Computing, integrated with the upcoming 5th Generation networking access. A cohesive end-to-end architecture is proposed, fully exploiting Information-Centric Networking together with the Mobile Follow-Me Cloud approach, for enhancing the migration of content caches located at the edge of cloudified mobile networks. The chosen content-relocation algorithm attains content-availability improvements of up to 500 when a mobile user performs a request, compared against other existing solutions. The performed evaluation considers a realistic core network, with functional and non-functional measurements, including the deployment of the entire system and the computation and allocation/migration of resources. The achieved results reveal that the proposed architecture is beneficial not only from the users' perspective but also from the providers' point of view, as providers may be able to optimize their resources and achieve significant bandwidth savings.
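As a hedged illustration of the content-relocation idea, the sketch below migrates cached content towards a user's next point of attachment only when the expected latency savings outweigh the one-off migration cost. The node names, latency figures and cost model are assumptions for demonstration; the paper's actual algorithm is not reproduced here.

```python
# Illustrative follow-me content-relocation rule for edge caches.
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    latency_ms: float   # expected user-to-node latency after handover
    has_content: bool

def should_migrate(current: EdgeNode, target: EdgeNode,
                   migration_cost_ms: float, requests_expected: int) -> bool:
    """Migrate when the cumulative latency saved over the expected
    number of requests outweighs the one-off migration cost."""
    if target.has_content:
        return False  # already cached at the target edge node
    saving = (current.latency_ms - target.latency_ms) * requests_expected
    return saving > migration_cost_ms

# Hypothetical nodes and costs, purely for demonstration.
current = EdgeNode("edge-A", latency_ms=40.0, has_content=True)
target = EdgeNode("edge-B", latency_ms=5.0, has_content=False)
print(should_migrate(current, target, migration_cost_ms=200.0, requests_expected=10))
```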
Abstract:
Building on institutional theory and the family sociology literature, we explore the logics that underlie the formation of transaction price expectations related to the intergenerational transfer of corporate ownership in private family firms. By probing a sample of 3,487 students with a family business background from 20 countries, we show that next-generation family members expect to receive a 56.58% discount relative to a nonfamily buyer (i.e. the family discount) when taking over the parents' firm. We also show that the logic underlying the formation of family discount expectations is characterized by parental altruism, filial reciprocity, filial decency and parental inducement. These norms embrace both the family and market logics and accommodate the duties and demands of children and parents in determining a fair transfer price. These findings are important for institutional theory as well as for the family business and entrepreneurial exit literatures.
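For concreteness, the reported family discount maps onto an expected transaction price as in the minimal sketch below; the market price is a hypothetical figure, and only the 56.58% average discount comes from the study.

```python
# Expected intra-family price implied by the average family discount.
nonfamily_price = 1_000_000   # hypothetical price a nonfamily buyer would pay
family_discount = 0.5658      # average expected discount reported in the study
family_price = nonfamily_price * (1 - family_discount)
print(f"expected intra-family transaction price: {family_price:,.0f}")  # 434,200
```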
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-04
Abstract:
This paper argues for the systematic development and presentation of evidence-based guidelines for the appropriate use of computers by children. The currently available guidelines are characterised and a proposed conceptual model is presented. Five principles are presented as a foundation for the guidelines. The paper concludes with a framework for the guidelines, key evidence for and against guidelines, and gaps in the available evidence, with the aim of facilitating further discussion. Relevance to industry: The current generation of children in affluent countries will typically have over 10 years of computer experience before they enter the workforce. Consequently, the primary prevention of computer-related health disorders and the development of good productivity skills for the next generation of workers need to occur during childhood.
Abstract:
In many Environmental Information Systems the actual observations arise from a discrete monitoring network, which might be rather heterogeneous in both location and the types of measurements made. In this paper we describe the architecture and infrastructure of a system, developed as part of the EU FP6 funded INTAMAP project, to provide a service-oriented solution that allows the construction of an interoperable, automatic interpolation system. This system will be based on the Open Geospatial Consortium's Web Feature Service (WFS) standard. The essence of our approach is to extend the GML3.1 observation feature to include information about the sensor using SensorML, and to further extend this to incorporate observation error characteristics. Our extended WFS will accept observations and store them in a database. The observations will be passed to our R-based interpolation server, which will use a range of methods, including a novel sparse, sequential kriging method (only briefly described here), to produce an internal representation of the interpolated field resulting from the observations currently uploaded to the system. The extended WFS will then accept queries such as 'What is the probability distribution of the desired variable at a given point?', 'What is the mean value over a given region?', or 'What is the probability of exceeding a certain threshold at a given location?'. To support information-rich transfer of complex and uncertain predictions we are developing schemas to represent probabilistic results in a GML3.1 (object-property) style. The system will also offer the more easily accessible Web Map Service and Web Coverage Service interfaces, allowing users to access the system at the level of complexity they require for their specific application. Such a system will offer a very valuable contribution to the next generation of Environmental Information Systems in the context of real-time mapping for monitoring and security, particularly for systems that employ a service-oriented architecture.
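As an illustration of the threshold query above, the sketch below computes an exceedance probability from a Gaussian predictive distribution of the kind kriging produces. The mean and standard deviation are placeholder values, not INTAMAP output (whose interpolation server is R-based; Python is used here purely for illustration).

```python
# Answering 'probability of exceeding a threshold at a location' from a
# Gaussian kriging predictive distribution at the query point.
from scipy.stats import norm

def exceedance_probability(mean: float, std: float, threshold: float) -> float:
    """P(Z(x0) > threshold) under a Gaussian predictive distribution."""
    return 1.0 - norm.cdf(threshold, loc=mean, scale=std)

mean, std = 42.0, 6.5  # placeholder kriging mean and standard deviation
print(exceedance_probability(mean, std, threshold=50.0))  # ~0.109
```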
Abstract:
Ad hoc wireless sensor networks (WSNs) are formed from self-organising configurations of distributed, energy-constrained, autonomous sensor nodes. The service lifetime of such sensor nodes depends on the power supply and the energy consumption, which is typically dominated by the communication subsystem. One of the key challenges in unlocking the potential of such data-gathering sensor networks is conserving energy so as to maximize their post-deployment active lifetime. This thesis describes the research carried out on the continual development of the novel energy-efficient Optimised grids algorithm, which increases the WSN lifetime and improves on the QoS parameters, yielding higher throughput and lower latency and jitter for the next generation of WSNs. Based on the relationship between range and traffic, the novel Optimised grids algorithm provides a robust, traffic-dependent, energy-efficient grid size that minimises the cluster-head energy consumption in each grid and balances the energy use throughout the network. Efficient spatial reusability allows the novel Optimised grids algorithm to improve on network QoS parameters. The most important advantage of this model is that it can be applied to all one- and two-dimensional traffic scenarios where the traffic load may fluctuate due to sensor activities. During traffic fluctuations the novel Optimised grids algorithm can be used to re-optimise the wireless sensor network to bring further benefits in energy reduction and improvement in QoS parameters. As the idle energy becomes dominant at lower traffic loads, the new Sleep Optimised grids model incorporates the sleep-energy and idle-energy duty cycles that can be implemented to achieve further network lifetime gains in all wireless sensor network models. Another key advantage of the novel Optimised grids algorithm is that it can be implemented alongside existing energy-saving protocols such as GAF, LEACH, SMAC and TMAC to further enhance network lifetimes and improve on QoS parameters. The novel Optimised grids algorithm does not interfere with these protocols, but creates an overlay to optimise the grid sizes and hence the transmission range of the wireless sensor nodes.
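The trade-off behind a traffic-dependent grid size can be sketched as follows: the cluster head in each grid spends energy receiving relayed traffic (smaller grids mean more hops) and transmitting over the grid diagonal (larger grids mean longer range), so an intermediate grid size minimises its per-round energy. The constants and the first-order radio energy model below are textbook-style assumptions, not the thesis's exact formulation.

```python
# Simplified grid-size trade-off under a first-order radio model.
import numpy as np

E_ELEC = 50e-9     # J/bit, radio electronics energy (assumed)
E_AMP = 100e-12    # J/bit/m^2, free-space amplifier energy, d^2 path loss (assumed)
FIELD = 200.0      # m, distance the traffic must cover to reach the sink (assumed)

def cluster_head_energy(grid_size: float, traffic_bits: float) -> float:
    """Total per-round cluster-head energy along the relay path."""
    hops = FIELD / grid_size                        # number of grid-to-grid hops
    d = grid_size * np.sqrt(2)                      # transmit over the grid diagonal
    e_tx = (E_ELEC + E_AMP * d**2) * traffic_bits   # forwarding cost per hop
    e_rx = E_ELEC * traffic_bits                    # reception cost per hop
    return hops * (e_tx + e_rx)

sizes = np.linspace(10.0, 100.0, 91)
energies = [cluster_head_energy(s, traffic_bits=4000) for s in sizes]
best = sizes[int(np.argmin(energies))]
print(f"energy-optimal grid size ~ {best:.0f} m under this model")
```

In this toy model the optimum lands near sqrt(E_ELEC / E_AMP) ≈ 22 m regardless of load, because the traffic term factors out; the traffic-dependent grid size of the Optimised grids algorithm arises from a richer model than the one assumed here.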
Abstract:
Wireless Mesh Networks (WMNs) have emerged as a key technology for the next generation of wireless networking. Instead of being merely another type of ad-hoc networking, WMNs diversify the capabilities of ad-hoc networks. Many kinds of protocols work over WMNs, such as IEEE 802.11a/b/g, 802.15 and 802.16. To achieve high throughput under varying conditions, these protocols have to adapt their transmission rate. While transmission rate adaptation is a significant component, only a few algorithms, such as Auto Rate Fallback (ARF) and Receiver Based Auto Rate (RBAR), have been published. In this paper we show that MAC, packet loss and physical layer conditions play an important role in maintaining a good channel condition. We also perform rate adaptation along with multiple packet transmission for better throughput. By allowing for dynamically monitored multiple packet transmission and adapting to changes in channel quality, adjusting the packet transmission rates according to certain optimization criteria, improvements in performance can be obtained. The proposed method detects channel congestion by measuring the fluctuation of the signal relative to its standard deviation, and detects packet loss before the channel performance diminishes. We show that the use of such techniques in WMNs can significantly improve performance. The effectiveness of the proposed method is demonstrated in an experimental wireless network testbed via packet-level simulation. Our simulation results show that, regardless of the channel condition, we were able to improve throughput performance.
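One plausible way to realise the congestion-detection rule described above is sketched below: recent signal samples are kept in a sliding window, a sample deviating from the window mean by more than k standard deviations is treated as a sign of degrading channel quality, and the transmission rate is stepped down before losses mount. The window size, threshold k and 802.11a/g rate table are illustrative assumptions, not the paper's parameters.

```python
# Sliding-window signal-fluctuation detector driving rate adaptation.
from collections import deque
import statistics

RATES_MBPS = [6, 9, 12, 18, 24, 36, 48, 54]  # 802.11a/g rate set

class RateAdapter:
    def __init__(self, window: int = 32, k: float = 2.0):
        self.samples = deque(maxlen=window)
        self.k = k
        self.idx = len(RATES_MBPS) - 1   # start at the highest rate

    def on_signal_sample(self, rssi_dbm: float) -> int:
        """Update the window and return the rate (Mbps) to use next."""
        self.samples.append(rssi_dbm)
        if len(self.samples) >= 8:
            mean = statistics.fmean(self.samples)
            std = statistics.pstdev(self.samples)
            if std > 0 and abs(rssi_dbm - mean) > self.k * std:
                self.idx = max(0, self.idx - 1)   # channel degrading: back off
            elif self.idx < len(RATES_MBPS) - 1:
                self.idx += 1                      # stable: probe a higher rate
        return RATES_MBPS[self.idx]

adapter = RateAdapter()
for rssi in [-60, -61, -60, -59, -60, -61, -60, -75]:  # sudden drop at the end
    rate = adapter.on_signal_sample(rssi)
print(f"selected rate after fluctuation: {rate} Mbps")
```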