992 results for Research Platforms


Relevance: 30.00%

Abstract:

Morison's equation is used to estimate internal solitary wave-induced forces on SPAR and semi-submersible platforms, and the results are compared with ocean surface wave loading. It is shown that Morison's equation is an appropriate approach for estimating internal wave loading even for SPAR and semi-submersible platforms, and that the internal solitary wave load on floating platforms is comparable to its surface wave counterpart. Moreover, the effects of layers of different thickness on the internal solitary wave force are investigated.
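For reference, Morison's equation superposes an inertia term and a drag term to give the inline force on a slender member. Below is a minimal sketch for a single cylinder element; the coefficient values and wave kinematics are illustrative placeholders, not values from the study:

```python
# Morison-type inline force on a vertical cylinder element:
# F(t) = rho * Cm * V * du/dt + 0.5 * rho * Cd * A * u * |u|
import math

def morison_force(u, du_dt, rho=1025.0, Cd=1.0, Cm=2.0, D=10.0, dz=1.0):
    """Inline force (N) on a cylinder element of diameter D and height dz."""
    A = D * dz                          # projected area of the element
    V = math.pi * (D / 2.0) ** 2 * dz   # displaced volume of the element
    inertia = rho * Cm * V * du_dt          # inertia (acceleration) term
    drag = 0.5 * rho * Cd * A * u * abs(u)  # drag (velocity) term
    return inertia + drag

# Example: wave-induced horizontal velocity and acceleration at the element.
print(morison_force(u=0.8, du_dt=0.05))
```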

Relevance: 30.00%

Abstract:

Outcomes of a piece of research into the accessibility of e-books.

Relevance: 30.00%

Abstract:

The Alliance for Coastal Technologies (ACT) Workshop on Towed Vehicles: Undulating Platforms as Tools for Mapping Coastal Processes and Water Quality Assessment was convened February 5-7, 2007, at the Embassy Suites Hotel, Seaside, California, and sponsored by the ACT-Pacific Coast partnership at the Moss Landing Marine Laboratories (MLML). The workshop on towed undulating vehicles (henceforth TUVs) was co-chaired by Richard Burt (Chelsea Technology Group) and Stewart Lamerdin (MLML Marine Operations). Invited participants were selected to provide uniform representation of academic researchers, private-sector product developers, and existing and potential data-product users from the resource management community, enabling the development of broad consensus opinions on the application of TUV platforms in coastal resource assessment and management.

The workshop was organized to address recognized limitations of point-based monitoring programs, which, while providing valuable data, cannot describe the spatial heterogeneity and extent of features distributed in the water column. This is particularly true as surveys approach the coastal zone, where tidal and estuarine influences produce spatially and temporally heterogeneous water masses and entrained biological components. Aerial or satellite-based remote sensing can assess the areal extent of plumes and blooms, yet provides no information on the third dimension of these features. Towed vehicles offer a cost-effective solution to this problem by providing platforms that can sample in the horizontal, vertical, and temporal domains, and TUVs represent useful platforms for event-response characterization.

This workshop reviewed the current status of towed vehicle technology, focusing on limitations of depth, data telemetry, instrument power demands, and ship requirements, in an attempt to identify means of incorporating such technology more routinely into monitoring and event-response programs. Specifically, the participants were charged to: (1) summarize the state of the art in TUV technologies; (2) identify how TUV platforms are used and how they can assist coastal managers in fulfilling their regulatory and management responsibilities; (3) identify barriers and challenges to the application of TUV technologies in management and research activities; and (4) recommend a series of community actions to overcome the identified barriers and challenges.

A series of plenary presentations was given to inform the subsequent breakout discussions. Dave Nelson (University of Rhode Island) provided extensive summaries and a real-world assessment of the operational features of a variety of TUV platforms available in the UNOLS scientific fleet. Dr. Burke Hales (Oregon State University) described the modification of a TUV to provide a novel sampling platform for high-resolution mapping of chemical distributions in near real time. Dr. Sonia Batten (Sir Alister Hardy Foundation for Ocean Science) provided an overview of the deployment of specialized towed vehicles equipped with rugged continuous plankton recorders on ships of opportunity to obtain long-term, basin-wide surveys of zooplankton community structure, enhancing our understanding of trends in secondary production in the upper ocean. [PDF contains 32 pages]

Relevance: 30.00%

Abstract:

The Alliance for Coastal Technologies (ACT) Workshop entitled "Biological Platforms as Sensor Technologies and their Use as Indicators for the Marine Environment" was held in Seward, Alaska, September 19-21, 2007. The workshop was co-hosted by the University of Alaska Fairbanks (UAF) and the Alaska SeaLife Center (ASLC), and was attended by 25 participants representing a wide range of research scientists, managers, and manufacturers who develop and deploy sensor equipment using aquatic vertebrates as the mode of transport. Eight recommendations were made by participants at the conclusion of the workshop and are presented here without prioritization:

1. Encourage research toward the development of energy-scavenging devices of suitable sizes for use in remote sensing packages attached to marine animals.
2. Encourage funding sources for the development of new sensor technologies and animal-borne tags.
3. Develop animal-borne environmental sensor platforms that offer more combined systems and improved data-recovery methodologies, and expand the geographic scope of complementary fixed sensor arrays.
4. Engage the oceanographic community by (a) offering a mini-workshop at an AGU ocean sciences conference for people interested in developing an ocean carbon program that utilizes animal-borne sensor technology, and (b) reaching out to chemical oceanographers.
5. Mine and merge technologies from other disciplines that may be applied to marine sensors (e.g., the biomedical field).
6. Encourage the NOAA Permitting Office to (a) make the permitting system for using animal platforms more predictable, reliable, and consistent, (b) establish an evaluation process, and (c) adhere to established standards.
7. Promote the expanded use of calibrated hydrophones as part of existing animal platforms.
8. Encourage the Integrated Ocean Observing System (IOOS) to promote animal tracking as an effective means of sampling the marine environment, and the use of animals as ocean sensor technology platforms.

[PDF contains 20 pages]

Relevance: 30.00%

Abstract:

Between 1995 and 2002, we surveyed fish assemblages at seven oil platforms off southern and central California using the manned research submersible Delta. At each platform there is a large horizontal beam situated at or near the sea floor; in some instances shells and sediment have buried this beam, and in others it is partially or completely exposed. We found that fish species responded in various ways to the degree of exposure of the beam. A few species, such as blackeye goby (Rhinogobiops nicholsii), greenstriped rockfish (Sebastes elongatus), and pink seaperch (Zalembius rosaceus), tended to avoid the beam. However, many species that typically associate with natural rocky outcrops, such as bocaccio (S. paucispinis), cowcod (S. levis), copper (S. caurinus), greenblotched (S. rosenblatti), pinkrose (S. simulator), and vermilion (S. miniatus) rockfishes, were found most often where the beam was exposed. In particular, a group of species (e.g., bocaccio, cowcod, blue (Sebastes mystinus), and vermilion rockfishes), called here the "sheltering habitat" guild, lived primarily where the beam was exposed and formed a crevice. This work demonstrates that the presence of sheltering sites is important in determining the species composition of man-made reefs and, likely, natural reefs. It also indicates that adding structures that form sheltering sites in and around decommissioned platforms will likely lead to higher densities of many species typical of hard and complex structure.

Relevance: 30.00%

Abstract:

This paper is concerned with several of the most important aspects of Competence-Based Learning (CBL): course authoring, assignments, and categorization of learning content. The latter is part of the so-called Bologna Process (BP) and can effectively be supported by integrating knowledge resources, e.g., standardized skill and competence taxonomies, into the target implementation, making effective use of an open integration architecture while fostering the interoperability of hybrid knowledge-based e-learning solutions. Modern scenarios call for interoperable software solutions that seamlessly integrate existing e-learning infrastructures and legacy tools with innovative technologies while remaining cognitively efficient to handle, so that prospective users can adopt them without learning overhead. At the same time, methods of Learning Design (LD) in combination with CBL are becoming increasingly important for the production and maintenance of easily maintainable solutions. We present our approach to developing competence-based course-authoring and assignment-support software. It bridges the gap between contemporary Learning Management Systems (LMS) and established legacy learning infrastructures by embedding existing resources via Learning Tools Interoperability (LTI). Furthermore, the underlying conceptual architecture for this integration approach is explained, and a competence management structure based on knowledge technologies supporting standardized skill and competence taxonomies is introduced. The overall goal is to develop a software solution that not only merges flawlessly into a legacy platform and several other learning environments, but also remains intuitively usable. As a proof of concept, the platform-independent conceptual architecture model is validated against a concrete use-case scenario.
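As background on the LTI embedding mentioned above: an LTI 1.1 launch is an HTML form POST whose fields are signed with OAuth 1.0a (HMAC-SHA1). The following is a minimal sketch using only the Python standard library; the URL, key, and secret are placeholders, and this is not the paper's implementation:

```python
# Sign an LTI 1.1 basic launch request with OAuth 1.0a (HMAC-SHA1).
import base64, hashlib, hmac, time, uuid
from urllib.parse import quote

def sign_lti_launch(url, params, consumer_key, consumer_secret):
    params = dict(params,
                  oauth_consumer_key=consumer_key,
                  oauth_nonce=uuid.uuid4().hex,
                  oauth_signature_method="HMAC-SHA1",
                  oauth_timestamp=str(int(time.time())),
                  oauth_version="1.0")
    # OAuth base string: method, URL, and sorted percent-encoded params.
    enc = lambda s: quote(str(s), safe="~")
    pairs = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    base = "&".join(["POST", enc(url), enc(pairs)])
    key = enc(consumer_secret) + "&"   # no token secret in LTI 1.1
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    params["oauth_signature"] = base64.b64encode(digest).decode()
    return params  # POST these as form fields to launch the tool

launch = sign_lti_launch(
    "https://tool.example.org/launch",          # placeholder tool URL
    {"lti_message_type": "basic-lti-launch-request",
     "lti_version": "LTI-1p0",
     "resource_link_id": "course42-item7"},     # placeholder link id
    consumer_key="demo-key", consumer_secret="demo-secret")
```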

Relevance: 30.00%

Abstract:

Performance evaluation of parallel software and architectural exploration of innovative hardware support face a common challenge on emerging manycore platforms: they are limited by the slow running time and low accuracy of software simulators. Manycore FPGA prototypes are difficult to build, but they offer great rewards: software running on such prototypes runs orders of magnitude faster than on current simulators, and researchers gain significant architectural insight during the modeling process. We use the Formic FPGA prototyping board [1], which specifically targets scalable and cost-efficient multi-board prototyping, to build and test a 64-board model of a 512-core, MicroBlaze-based, non-coherent hardware prototype with a full network-on-chip in a 3D-mesh topology. We expand the hardware architecture to include the ARM Versatile Express platforms and build a 520-core heterogeneous prototype of 8 Cortex-A9 cores and 512 MicroBlaze cores. We then develop an MPI library for the prototype and evaluate it extensively using several bare-metal and MPI benchmarks. We find that our processor prototype is highly scalable, faithfully models single-chip multicore architectures, and is a very efficient platform for parallel programming research, being 50,000 times faster than software simulation.
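By way of illustration, a ping-pong kernel of the kind commonly used in MPI evaluations looks like the sketch below; the paper's own benchmark suite is not reproduced here, and this assumes mpi4py and NumPy rather than the prototype's bare-metal MPI library:

```python
# Two-rank ping-pong microbenchmark: run with  mpiexec -n 2 python pingpong.py
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = np.zeros(1 << 16, dtype=np.uint8)   # 64 KiB message
reps = 1000

comm.Barrier()
t0 = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1); comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0); comm.Send(buf, dest=0)
comm.Barrier()

if rank == 0:
    rtt = (time.perf_counter() - t0) / reps   # mean round-trip time
    print(f"round trip: {rtt * 1e6:.1f} us, "
          f"bandwidth: {2 * buf.nbytes / rtt / 1e6:.1f} MB/s")
```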

Relevance: 30.00%

Abstract:

The increasing adoption of cloud computing, social networking, mobile, and big data technologies presents challenges and opportunities for both research and practice. Researchers face a deluge of data generated by social network platforms, further exacerbated by the co-mingling of social network platforms and the emerging Internet of Everything. While the topicality of big data and social media increases, the literature lacks conceptual tools to help researchers, many of them from non-technical disciplines, approach, structure, and codify knowledge from social media big data in diverse subject-matter domains. Researchers do not have a general-purpose scaffold for making sense of the data and the complex web of relationships between entities, social networks, social platforms, and other third-party databases, systems, and objects. This is further complicated when spatio-temporal data are introduced. Based on practical experience of working with social media datasets and on the existing literature, we propose a general research framework for social media research using big data. Such a framework assists researchers in placing their contributions in an overall context, focusing their research efforts, and building the body of knowledge in a given discipline area using social media data in a consistent and coherent manner.
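To make the "web of relationships" concrete, the sketch below shows one plausible general-purpose scaffold: typed graph nodes for users, posts, and platforms, with spatio-temporal attributes. The entity names are invented and the structure is only one possible reading of such a framework, not the one it prescribes:

```python
# Typed entity-relationship graph over social media data (networkx).
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("user:alice", kind="user", platform="twitter")
g.add_node("post:123", kind="post", platform="twitter",
           created="2015-06-01T12:00:00Z", geo=(53.35, -6.26))
g.add_node("platform:twitter", kind="platform")

g.add_edge("user:alice", "post:123", rel="authored",
           at="2015-06-01T12:00:00Z")
g.add_edge("post:123", "platform:twitter", rel="hosted_on")

# Slice the graph by relationship type and time window.
authored = [(u, v, d) for u, v, d in g.edges(data=True)
            if d.get("rel") == "authored" and d.get("at", "") >= "2015-06-01"]
print(authored)
```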

Relevance: 30.00%

Abstract:

Modern approaches to biomedical research and diagnostics targeted towards precision medicine are generating ‘big data’ across a range of high-throughput experimental and analytical platforms. Integrative analysis of this rich clinical, pathological, molecular and imaging data represents one of the greatest bottlenecks in biomarker discovery research in cancer and other diseases. Following on from the publication of our successful framework for multimodal data amalgamation and integrative analysis, Pathology Integromics in Cancer (PICan), this article will explore the essential elements of assembling an integromics framework from a more detailed perspective. PICan, built around a relational database storing curated multimodal data, is the research tool sitting at the heart of our interdisciplinary efforts to streamline biomarker discovery and validation. While recognizing that every institution has a unique set of priorities and challenges, we will use our experiences with PICan as a case study and starting point, rationalizing the design choices we made within the context of our local infrastructure and specific needs, but also highlighting alternative approaches that may better suit other programmes of research and discovery. Along the way, we stress that integromics is not just a set of tools, but rather a cohesive paradigm for how modern bioinformatics can be enhanced. Successful implementation of an integromics framework is a collaborative team effort that is built with an eye to the future and greatly accelerates the processes of biomarker discovery, validation and translation into clinical practice.
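As a concrete illustration of a relational database storing curated multimodal data, the sketch below sets up a toy schema in SQLite; the table and column names are illustrative and are not PICan's actual schema:

```python
# Toy multimodal schema: modalities joined on patient/sample keys.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    diagnosis  TEXT,
    age        INTEGER
);
CREATE TABLE sample (
    sample_id  INTEGER PRIMARY KEY,
    patient_id INTEGER REFERENCES patient(patient_id),
    tissue     TEXT
);
-- One row per (sample, platform, marker): molecular and imaging
-- measurements from different platforms share one curated table.
CREATE TABLE measurement (
    sample_id INTEGER REFERENCES sample(sample_id),
    platform  TEXT,   -- e.g. 'IHC', 'RNA-seq', 'digital pathology'
    marker    TEXT,
    value     REAL
);
""")
# Integrative queries then join modalities across the shared keys.
conn.execute("""SELECT p.diagnosis, m.platform, AVG(m.value)
                FROM patient p JOIN sample s USING (patient_id)
                JOIN measurement m USING (sample_id)
                GROUP BY p.diagnosis, m.platform""")
```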

Relevance: 30.00%

Abstract:

Heterogeneous multicore platforms are becoming an interesting alternative for embedded computing systems with a limited power supply, as they can execute specific tasks efficiently. Nonetheless, one of the main challenges of such platforms is optimising energy consumption in the presence of temporal constraints. This paper addresses the problem of task-to-core allocation on heterogeneous multicore platforms such that the overall energy consumption of the system is minimised. To this end, we propose a two-phase approach that considers both dynamic and leakage energy consumption: (i) the first phase allocates tasks to cores so as to reduce dynamic energy consumption; (ii) the second phase refines this allocation to achieve better sleep states, trading off dynamic energy consumption against the reduction in leakage energy consumption. This hybrid approach considers core frequency set-points, task energy consumption, and the sleep states of the cores to reduce the energy consumption of the system. Particular emphasis has been placed on a realistic power model, which increases the practical relevance of the proposed approach. Finally, extensive simulations demonstrate the effectiveness of the proposed algorithm. In the best case, energy savings of up to 18% are achieved over the first-fit algorithm, which previous work has shown to perform better than other bin-packing heuristics for the target heterogeneous multicore platform.
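The two-phase idea can be sketched as follows. This is a simplified illustration, not the authors' algorithm: phase one greedily places each task on the core that minimises its dynamic energy, and phase two empties lightly loaded cores when the leakage energy saved by letting a core sleep outweighs the extra dynamic energy of migration. All power numbers are invented, and temporal constraints are omitted:

```python
def dyn_energy(task_cycles, core):          # dynamic energy of one task
    return core["p_dyn"] * task_cycles / core["freq"]

def phase1(tasks, cores):
    """Phase 1: place each task on the core minimising dynamic energy."""
    alloc = {c["id"]: [] for c in cores}
    for t in sorted(tasks, reverse=True):   # heaviest tasks first
        best = min(cores, key=lambda c: dyn_energy(t, c))
        alloc[best["id"]].append(t)
    return alloc

def phase2(alloc, cores, horizon=1.0):
    """Phase 2: migrate a core's tasks away when the leakage saved by
    putting it to sleep exceeds the extra dynamic energy of migration."""
    by_id = {c["id"]: c for c in cores}
    for cid, ts in list(alloc.items()):
        others = [o for o in cores if o["id"] != cid and alloc[o["id"]]]
        if not ts or not others:
            continue
        tgt = others[0]
        extra_dyn = sum(dyn_energy(t, tgt) - dyn_energy(t, by_id[cid])
                        for t in ts)
        saved_leak = by_id[cid]["p_leak"] * horizon  # core can now sleep
        if saved_leak > extra_dyn:                   # trade-off test
            alloc[tgt["id"]].extend(ts)
            alloc[cid] = []
    return alloc

cores = [{"id": "A9",  "freq": 1.0e9, "p_dyn": 1.2, "p_leak": 0.4},
         {"id": "LP0", "freq": 4.0e8, "p_dyn": 0.3, "p_leak": 0.1}]
print(phase2(phase1([2e8, 1e8, 5e7], cores), cores))
```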

Relevance: 30.00%

Abstract:

The last decade has witnessed a major shift towards the deployment of embedded applications on multi-core platforms. However, real-time applications have not been able to fully benefit from this transition, as the computational gains offered by multi-cores are often offset by performance degradation due to shared resources, such as main memory. To efficiently use multi-core platforms for real-time systems, it is hence essential to tightly bound the interference when accessing shared resources. Although there has been much recent work in this area, a remaining key problem is to address the diversity of memory arbiters in the analysis to make it applicable to a wide range of systems. This work handles diverse arbiters by proposing a general framework to compute the maximum interference caused by the shared memory bus and its impact on the execution time of the tasks running on the cores, considering different bus arbiters. Our novel approach clearly demarcates the arbiter-dependent and independent stages in the analysis of these upper bounds. The arbiter-dependent phase takes the arbiter and the task memory-traffic pattern as inputs and produces a model of the availability of the bus to a given task. Then, based on the availability of the bus, the arbiter-independent phase determines the worst-case request-release scenario that maximizes the interference experienced by the tasks due to the contention for the bus. We show that the framework addresses the diversity problem by applying it to a memory bus shared by a fixed-priority arbiter, a time-division multiplexing (TDM) arbiter, and an unspecified work-conserving arbiter using applications from the MediaBench test suite. We also experimentally evaluate the quality of the analysis by comparison with a state-of-the-art TDM analysis approach and consistently showing a considerable reduction in maximum interference.
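For the simplest arbiter covered, TDM, the two stages can be illustrated as below; the parameters are invented and this is not the paper's exact analysis:

```python
# Arbiter-dependent stage: bus availability under TDM. A request issued
# just after the core's slot closes waits for the other (n_cores - 1)
# slots before being served in the core's own slot.
def tdm_worst_case_delay_per_request(n_cores, slot_len):
    return (n_cores - 1) * slot_len + slot_len

# Arbiter-independent stage: in the worst-case request-release scenario,
# every request of the task suffers the maximum per-request delay.
def interference_bound(n_requests, n_cores, slot_len):
    return n_requests * tdm_worst_case_delay_per_request(n_cores, slot_len)

# A task issuing 500 memory requests on a 4-core TDM bus with 40-cycle slots:
print(interference_bound(500, 4, 40), "cycles of worst-case bus delay")
```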

Relevance: 30.00%

Abstract:

This research aims to provide a better understanding of how firms stimulate knowledge sharing through the use of collaboration tools, in particular Emergent Social Software Platforms (ESSPs). It focuses on the distinctive applications of ESSPs and on the initiatives that help maximize their advantages. In the first part of the research, I itemize the existing types of collaboration tools and classify them into categories according to their capabilities, their objectives, and their capacity to promote knowledge sharing. In the second part, based on an exploratory case study at Cisco Systems, I identify the main applications of an existing enterprise social software platform named Webex Social. By combining qualitative and quantitative approaches, and by combining data collected from survey results with analysis of the company's documents, I expect to maximize the outcome of this investigation and reduce the risk of bias. Although effects cannot be generalized from a single case study, some utilization patterns emerge from the data collected, and potential trends in managing knowledge have been observed. The results of the research have also enabled the identification of most of the constraints experienced by users of the firm's social software platform. Ultimately, this research should provide a preliminary framework for firms planning to create or implement a social software platform and for firms seeking to increase adoption levels and promote overall user participation. It highlights the common traps that developers should avoid when designing a social software platform and the capabilities it should inherently carry to support an effective knowledge management strategy.

Relevance: 30.00%

Abstract:

Field lab: Entrepreneurial and innovative ventures

Relevance: 30.00%

Abstract:

To address the interindividual variability observed in the pharmacokinetic response to many drugs, we created a custom genotyping panel using unique assay design and development methods. Its primary purpose is to capture the genetic variation present in the key genes involved in the absorption, distribution, metabolism, and excretion (ADME) of many therapeutic agents. Although these genes and pathways are involved in several well-known pharmacokinetic mechanisms, there has so far been little effort toward the simultaneous evaluation of a large number of these genes with a single experimental tool. Pharmacogenomic research can be conducted using two approaches: 1) functional markers can be used to preselect or stratify patient populations based on known metabolic states; 2) tag markers can be used to discover new genotype-phenotype correlations. Currently, there is a need for a research tool that covers a large number of ADME genes and variants and whose content is applicable to both study designs. In this thesis, we developed a genotyping assay panel of 3,000 ADME genetic markers that meets this need. As part of this project, the genes and markers associated with the ADME family were selected in collaboration with several groups from academia and the pharmaceutical industry. Over three phases of assay development, the conversion rate for the 3,000 markers was improved from 83% to 97.4% through the incorporation of new strategies to overcome zones of genomic interference, including homologous regions and polymorphisms underlying the regions of interest. The accuracy of the genotyping panel was validated by evaluating more than 200 samples of known genotype, for which we obtained > 98% concordance. In addition, a cross-comparison between data from this assay and data obtained on various commercially available technology platforms revealed an overall concordance of > 99.5%. The effectiveness of our design strategy was demonstrated by the successful use of this assay in several research projects in which more than 1,000 samples were tested. Among other things, we successfully evaluated 150 liver samples that had been extensively characterized for several phenotypes. In these samples, we were able to validate 13 ADME genes with previously reported cis-eQTLs and to discover 13 additional ADME genes with cis-eQTLs that had not been observed using standard methods. Finally, in support of this work, a software tool, Optimus Primer, was developed to assist in assay development. The software was also used to support the enrichment of genomic targets for sequencing experiments. The content, design, optimization, and validation of our panel clearly distinguish it from the commercial assays currently available on the market, which either include functional markers for only a small number of genes or do not offer adequate coverage of the known ADME genes.
We can thus conclude that the assay we developed is, and will certainly continue to be, a highly useful tool for future pharmacokinetic studies and clinical trials that would benefit from the evaluation of a comprehensive list of ADME genes.
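The concordance checks described above amount to comparing two genotype call sets marker by marker. A minimal sketch, with invented sample data and ignoring no-calls:

```python
# Fraction of shared, callable markers on which two call sets agree.
def concordance(calls_a, calls_b, no_call="NN"):
    shared = [m for m in calls_a
              if m in calls_b and no_call not in (calls_a[m], calls_b[m])]
    agree = sum(calls_a[m] == calls_b[m] for m in shared)
    return agree / len(shared) if shared else float("nan")

# Illustrative calls for a few ADME markers (data invented).
panel = {"rs1045642": "AG", "rs4149056": "TT", "rs1128503": "NN"}
reference = {"rs1045642": "AG", "rs4149056": "TC"}
print(f"concordance: {concordance(panel, reference):.1%}")
```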

Relevance: 30.00%

Abstract:

In recent years, progress in mobile telecommunications has changed our way of life, in the private as well as the business domain. Mobile and wireless networks offer ever-increasing bit rates, mobile network operators provide more and more services, and at the same time the costs of mobile services and bit rates are decreasing. However, mobile services today still lack functions that integrate seamlessly into users' everyday life: service attributes such as context-awareness and personalisation are often proprietary, limited, or not available at all. To overcome this deficiency, telecommunications companies are heavily engaged in the research and development of service platforms for networks beyond 3G for the provisioning of innovative mobile services, and these service platforms are to support such service attributes. Service platforms provide basic service-independent functions such as billing, identity management, context management, and user profile management. Instead of developing their own solutions, developers of end-user services such as innovative messaging services or location-based services can use the platform-side functions for their own purposes; platform-side support for such functions removes complexity, development time, and development costs from service developers.

Context-awareness and personalisation are two of the most important aspects of service platforms in telecommunications environments, and their combination can be described as situation-dependent personalisation of services. Supporting this feature requires several processing steps. The focus of this doctoral thesis is on the processing step in which the user's current context is matched against situation-dependent user preferences to find the user preferences that match the current situation. Achieving this also requires a user profile management system and corresponding functionality, which are likewise covered by this thesis.

Altogether, this thesis provides the following contributions. The first part of the contribution is mainly architecture-oriented: we provide a user profile management system that addresses the specific requirements of service platforms in telecommunications environments, in particular dealing with situation-specific user preferences and with user information for various services. To structure the user information, we also propose a user profile structure and the corresponding user profile ontology as part of an ontology infrastructure in a service platform. The second part of the contribution is the selection mechanism for finding matching situation-dependent user preferences for the personalisation of services, provided as a sub-module of the user profile management system. Contrary to existing solutions, our selection mechanism is based on ontology reasoning. This mechanism is evaluated in terms of runtime performance and of supported functionality compared to other approaches; the results of the evaluation show the benefits and the drawbacks of ontology modelling and ontology reasoning in practical applications.
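The matching step at the heart of the thesis can be illustrated as below. The thesis performs this matching with ontology reasoning; the sketch uses plain dictionary matching only to convey the idea, and all context keys and preference values are invented:

```python
# Select the situation-dependent preference matching the current context.
def matches(condition, context):
    """A preference applies when all its condition keys hold in the context."""
    return all(context.get(k) == v for k, v in condition.items())

def select_preferences(preferences, context):
    applicable = [p for p in preferences if matches(p["when"], context)]
    # Prefer the most specific condition (the most constrained keys).
    return max(applicable, key=lambda p: len(p["when"]), default=None)

prefs = [
    {"when": {},                                   "ringtone": "loud"},
    {"when": {"location": "office"},               "ringtone": "silent"},
    {"when": {"location": "office", "activity": "meeting"},
     "ringtone": "off"},
]
ctx = {"location": "office", "activity": "meeting", "time": "14:00"}
print(select_preferences(prefs, ctx))   # -> the 'meeting' preference wins
```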