999 results for Exploitation Process
Abstract:
At present, 0.1-0.2% of patients undergoing surgery become aware during the procedure. This situation is referred to as anesthesia awareness and is obviously very traumatic for the person experiencing it. It is mostly caused by an insufficient dosage of the anesthetic Propofol, combined with the inability of the technology monitoring the depth of the patient's anesthetic state to notice the patient becoming aware. A possible solution is a highly sensitive and selective real-time monitoring device for Propofol based on optical absorption spectroscopy. Its working principle was postulated by Prof. Dr. habil. H. Hillmer and formulated in DE 10 2004 037 519 B4, filed on August 30, 2004. It consists of the exploitation of intra-cavity absorption effects in a two-mode laser system. In this dissertation, a two-mode external-cavity semiconductor laser, developed prior to this work, is enhanced and optimized into a functional sensor. Enhancements include the integration of variable couplers into the system and of a collimator arrangement into which samples can be introduced. A sample holder and cells are developed and characterized with a focus on compatibility with the measurement approach. Further optimization concerns the overall performance of the system: scattering sources are reduced by re-splicing all fiber-to-fiber connections, parasitic cavities are eliminated by suppressing the Fresnel reflections of all open fiber ends by means of optical isolators, and the wavelength stability of the system is improved by thermally insulating the Fiber Bragg Gratings (FBGs). The final laser sensor is characterized in detail thermally and optically. Two separate modes are obtained at 1542.0 and 1542.5 nm, each tunable over a range of 1 nm. The mode Full Width at Half Maximum (FWHM) is 0.06 nm and the Signal-to-Noise Ratio (SNR) is as high as 55 dB.
Independent of tuning, the two modes of the system can always be equalized in intensity, which is important because the delicacy of the intensity equilibrium is one of the main sensitivity-enhancing effects formulated in DE 10 2004 037 519 B4. For the proof-of-concept (POC) measurements, the target substance Propofol is dissolved in the solvents acetone and dichloromethane (DCM), which were investigated for compatibility with Propofol beforehand. Eight measurement series (two solvents, two cell lengths and two different mode spacings) are taken, which draw a uniform picture: the mode intensity ratio responds linearly to increasing Propofol concentration in all cases. The slope of the linear response indicates the sensitivity of the system. The eight series are split into two groups: measurements taken in long cells and measurements taken in short cells.
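At its simplest, a linear dependence of absorbance on analyte concentration follows from the Beer-Lambert law (the intra-cavity effects exploited by the sensor add further sensitivity on top of this). The sketch below uses purely illustrative numbers: the absorptivity, cell length and concentrations are assumptions, not values from the dissertation.

```python
import math

def transmitted_intensity(i0, epsilon, conc, path_len):
    """Beer-Lambert law: I = I0 * 10**(-epsilon * c * L)."""
    return i0 * 10 ** (-epsilon * conc * path_len)

# Hypothetical values: absorptivity epsilon (L mol^-1 cm^-1),
# cell length L (cm), and a range of analyte concentrations c (mol/L).
epsilon = 1.2      # assumed absorptivity at the probed wavelength
path_len = 1.0     # assumed "long cell" of 1 cm
i0 = 1.0           # intensity of the absorbed mode without analyte

for conc in (0.0, 0.05, 0.10):
    i = transmitted_intensity(i0, epsilon, conc, path_len)
    absorbance = -math.log10(i / i0)   # A = epsilon * c * L, linear in c
    print(conc, round(absorbance, 3))
```

The absorbance A = epsilon*c*L grows linearly with concentration, which is the simplest mechanism consistent with the linear intensity-ratio response reported above.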
Abstract:
An important challenge for conservation today is to understand the endangerment process and to identify generalized patterns in how threats occur and aggregate across taxa. Here we use a global database describing the main current external threats to mammals to evaluate the prevalence of distinct threatening processes, primarily of anthropogenic origin, and to identify generalized drivers of extinction and their association with vulnerability status and intrinsic species traits. We detect several primary threat combinations that are generally associated with distinct species. In particular, large and widely distributed mammals are affected by combinations of direct exploitation and threats associated with increasing landscape modification, ranging from logging to intense human land-use. Meanwhile, small, narrowly distributed species are affected by intensifying levels of landscape modification but are not directly exploited. In general, more vulnerable species are affected by a greater number of threats, suggesting that increased extinction risk is associated with the accumulation of external threats. Overall, our findings show that endangerment in mammals is strongly associated with increasing habitat loss and degradation caused by human land-use intensification. Large and widely distributed mammals face the additional risk of being hunted.
Abstract:
While the inventor is often the driver of an invention in its early stages, he or she needs to move between different social networks for knowledge in order to create and capture value. The main objective of this research is to propose a literature-based framework, grounded in innovation network theory and complemented with C-K theory, for analyzing the invention/innovation process of inventors and the product concepts in a packaging industry context. Empirical input from three case studies of packaging inventions and their inventors is used to elaborate the suggested framework. The article identifies important gaps in the literature on innovation networks. These are addressed through a theoretical framework based on network theories, complemented with C-K theory for the product design level. The strength-of-ties dimension of the theoretical framework suggests, in agreement with the mainstream literature and the cases presented, that weak ties are required to access the knowledge related to exploration networks and strong ties are required to utilize the knowledge in the exploitation network. The transformation network is an intermediate step acting as a bridge where entrepreneurs can find the required knowledge, as well as financing and companies interested in commercializing inventions. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Multi-objective optimization algorithms aim at finding Pareto-optimal solutions. Recovering Pareto fronts or Pareto sets from a limited number of function evaluations is a challenging problem. A popular approach in the case of expensive-to-evaluate functions is to appeal to metamodels. Kriging has been shown to be efficient as a basis for sequential multi-objective optimization, notably through infill sampling criteria that balance exploitation and exploration, such as the Expected Hypervolume Improvement. Here we consider Kriging metamodels not only for selecting new points, but as a tool for estimating the whole Pareto front and quantifying how much uncertainty remains on it at any stage of Kriging-based multi-objective optimization algorithms. Our approach relies on the Gaussian process interpretation of Kriging and builds upon conditional simulations. Using concepts from random set theory, we propose to adapt the Vorob’ev expectation and deviation to capture the variability of the set of non-dominated points. Numerical experiments illustrate the potential of the proposed workflow, and examples show how Gaussian process simulations and the estimated Vorob’ev deviation can be used to monitor the ability of Kriging-based multi-objective optimization algorithms to accurately learn the Pareto front.
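One ingredient of such a workflow can be sketched in a few lines: given draws of the objective values at candidate points (here synthetic random draws standing in for Gaussian process conditional simulations), compute each draw's non-dominated set and the per-point attainment frequency; thresholding that frequency is a simplified stand-in for the Vorob’ev quantile-set machinery, not the calibrated Vorob’ev expectation itself. All data and parameters below are made up for illustration.

```python
import numpy as np

def non_dominated_mask(Y):
    """Boolean mask of the non-dominated (Pareto) rows of Y, minimization."""
    n = len(Y)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(Y[j] <= Y[i]) and np.any(Y[j] < Y[i]):
                mask[i] = False  # row i is dominated by row j
                break
    return mask

rng = np.random.default_rng(0)
n_sims, n_points = 200, 30
# Synthetic "conditional simulations": draws of 2 objective values per point.
mean = rng.uniform(0.0, 1.0, size=(n_points, 2))
sims = mean + 0.05 * rng.standard_normal((n_sims, n_points, 2))

# Attainment frequency: how often each point is non-dominated across draws.
freq = np.mean([non_dominated_mask(s) for s in sims], axis=0)

beta = 0.5  # threshold; a Vorob'ev-type approach would calibrate this level
quantile_set = np.where(freq >= beta)[0]
print("points in the beta-quantile set:", quantile_set)
```

The spread of the per-point frequencies gives a rough picture of how much uncertainty remains on the Pareto front at the current stage of sampling.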
Abstract:
We present a novel surrogate model-based global optimization framework allowing a large number of function evaluations. The method, called SpLEGO, is based on a multi-scale expected improvement (EI) framework relying on both sparse and local Gaussian process (GP) models. First, a bi-objective approach relying on a global sparse GP model is used to determine potential next sampling regions. Local GP models are then constructed within each selected region. The method subsequently employs the standard expected improvement criterion to deal with the exploration-exploitation trade-off within the selected local models, leading to a decision on where to perform the next function evaluation(s). The potential of our approach is demonstrated using the so-called Sparse Pseudo-input GP as the global model. The algorithm is tested on four benchmark problems whose numbers of starting points range from 10^2 to 10^4. Our results show that SpLEGO is effective and capable of solving problems with a large number of starting points, and it even provides significant advantages when compared with state-of-the-art EI algorithms.
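The expected improvement criterion has a closed form under a Gaussian predictive distribution. The sketch below is a generic, stdlib-only implementation for minimization, not SpLEGO itself; the numeric inputs are illustrative and not taken from the paper.

```python
import math

def expected_improvement(mu, sigma, f_best):
    """EI for minimization: E[max(f_best - f(x), 0)] under N(mu, sigma^2)."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)   # degenerate (noise-free known) case
    z = (f_best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # Phi(z)
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # phi(z)
    return (f_best - mu) * cdf + sigma * pdf

# The exploration-exploitation trade-off: a low predictive mean (exploit)
# and a high predictive variance (explore) can both yield large EI.
print(expected_improvement(mu=0.2, sigma=0.05, f_best=0.3))  # exploiting
print(expected_improvement(mu=0.5, sigma=0.40, f_best=0.3))  # exploring
```

Maximizing this quantity over candidate points selects the next evaluation; in SpLEGO the criterion is applied within the selected local GP models.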
Abstract:
Automated and semi-automated accessibility evaluation tools are key to streamlining the process of accessibility assessment, and ultimately to ensuring that software products, contents, and services meet accessibility requirements. Different evaluation tools may better fit different needs and concerns, accounting for a variety of corporate and external policies, content types, invocation methods, deployment contexts, exploitation models, intended audiences and goals, and the specific overall process where they are introduced. This has led to the proliferation of many evaluation tools tailored to specific contexts. However, tool creators, who may not be familiar with the realm of accessibility and may be part of a larger project, lack any systematic guidance when facing the implementation of accessibility evaluation functionalities. Herein we present a systematic approach to the development of accessibility evaluation tools, leveraging the different artifacts and activities of a standardized development process model (the Unified Software Development Process) and providing templates of these artifacts tailored to accessibility evaluation tools. The presented work especially considers the work in progress in this area by the W3C/WAI Evaluation and Report Working Group (ERT WG).
Abstract:
RDB to RDF Mapping Language (R2RML) is a W3C recommendation that allows specifying rules for transforming relational databases into RDF. This RDF data can be materialized and stored in a triple store, so that SPARQL queries can be evaluated by the triple store. However, there are several cases where materialization is not adequate or possible, for example, if the underlying relational database is updated frequently. In those cases, RDF data is better kept virtual, and hence SPARQL queries over it have to be translated into SQL queries against the underlying relational database system, with the translation process taking into account the specified R2RML mappings. The first part of this thesis focuses on query translation. We discuss the formalization of the translation from SPARQL to SQL queries that takes R2RML mappings into account. Furthermore, we propose several optimization techniques so that the translation procedure generates SQL queries that can be evaluated more efficiently over the underlying databases. We evaluate our approach using a synthetic benchmark and several real cases, and show the positive results that we obtained. Direct Mapping (DM) is another W3C recommendation for the generation of RDF data from relational databases. While R2RML allows users to specify their own transformation rules, DM establishes fixed transformation rules. Although both recommendations were published at the same time, in September 2012, there has not been any formal study of the relationship between them. The second part of this thesis focuses on the study of the relationship between R2RML and DM. We divide this study into two directions: from R2RML to DM, and from DM to R2RML. From R2RML to DM, we study a fragment of R2RML having the same expressive power as DM. From DM to R2RML, we represent DM transformation rules as R2RML mappings, and also add the implicit semantics encoded in databases, such as subclass, 1-N and M-N relationships. This thesis shows that, by formalizing and optimizing R2RML-based SPARQL-to-SQL query translation, it is possible to use R2RML engines in real cases, as the resulting SQL is efficient enough to be evaluated by the underlying relational databases. In addition, this thesis furthers the understanding of the bidirectional relationship between the two W3C recommendations, something that had not been studied before.
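As a toy illustration of the kind of rewriting such a SPARQL-to-SQL translation performs: given a mapping that binds a class to a logical table and predicates to its columns, a single triple pattern becomes a SQL projection over the mapped table. The table EMP, its columns, and the ex: prefix below are hypothetical examples, and real R2RML engines handle far more general mappings, joins and filters.

```python
# Hypothetical R2RML-style mapping: one logical table, a subject IRI
# template, and predicate-to-column bindings (all names are made up).
MAPPING = {
    "table": "EMP",
    "subject_template": "http://example.com/emp/{EMPNO}",
    "predicates": {"ex:name": "ENAME", "ex:job": "JOB"},
}

def translate_triple_pattern(predicate):
    """Translate the SPARQL pattern '?s <predicate> ?o' into SQL against
    the mapped table: project the subject key and the bound column."""
    column = MAPPING["predicates"][predicate]
    return f"SELECT EMPNO, {column} FROM {MAPPING['table']}"

print(translate_triple_pattern("ex:name"))  # SELECT EMPNO, ENAME FROM EMP
```

The subject IRIs are then minted from the template over the returned key column; the optimizations studied in the thesis concern making the generated SQL (much larger than this in practice) efficient for the underlying database.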
Abstract:
Although generalist predators have been reported to forage less efficiently than specialists, there is little information on the extent to which learning can improve the efficiency of mixed-prey foraging. Repeated exposure of silver perch to mixed prey (pelagic Artemia and benthic Chironomus larvae) led to substantial fluctuations in reward rate over relatively long (20-day) timescales. When perch that were familiar with a single prey type were offered two prey types simultaneously, the rate at which they captured both familiar and unfamiliar prey dropped progressively over succeeding trials. This result was not predicted by simple learning paradigms, but could be explained in terms of an interaction between learning and attention. Between-trial patterns in overall intake were complex and differed between the two prey types, but were unaffected by previous prey specialization. However, patterns of prey priority (i.e. the prey type that was preferred at the start of a trial) did vary with previous prey training. All groups of fish converged on the most profitable prey type (chironomids), but this process took 15-20 trials. In contrast, fish offered a single prey type reached asymptotic intake rates within five trials and retained high capture abilities for at least 5 weeks. Learning and memory allow fish to maximize foraging efficiency on patches of a single prey type. However, when foragers are faced with mixed prey populations, cognitive constraints associated with divided attention may impair efficiency, and this impairment can be exacerbated by experience. (c) 2005 The Association for the Study of Animal Behaviour. Published by Elsevier Ltd. All rights reserved.
Abstract:
This paper extends existing understandings of how actors' constructions of ambiguity shape the emergent process of strategic action. We theoretically elaborate the role of rhetoric in exploiting strategic ambiguity, based on analysis of a longitudinal case study of an internationalization strategy within a business school. Our data show that actors use rhetoric to construct three types of strategic ambiguity: protective ambiguity that appeals to common values in order to protect particular interests, invitational ambiguity that appeals to common values in order to invite participation in particular actions, and adaptive ambiguity that enables the temporary adoption of specific values in order to appeal to a particular audience at one point in time. These rhetorical constructions of ambiguity follow a processual pattern that shapes the emergent process of strategic action. Our findings show that (1) the strategic actions that emerge are shaped by the way actors construct and exploit ambiguity, (2) the ambiguity intrinsic to the action is analytically distinct from ambiguity that is constructed and exploited by actors, and (3) ambiguity construction shifts over time to accommodate the emerging pattern of actions.
Abstract:
This paper explains how dynamic client portfolios can be a source of ambidexterity (i.e., exploration and exploitation) for knowledge-intensive firms (KIFs). Drawing from a unique qualitative dataset of firms in the global reinsurance market, we show how different types of client relationships underpin a dynamic client portfolio and become a source of ambidexterity for a KIF. We develop a process model to show how KIFs attain knowledge by segmenting their client portfolios, use that knowledge to explore and exploit within and across their client relationships, and dynamically adjust their client portfolios over time. Our study contributes to the literature on external sources of ambidexterity and dynamic management of client knowledge within KIFs.
Abstract:
The pressures on honeybee (Apis mellifera) populations, resulting from threats by modern pesticides, parasites, predators and diseases, have raised awareness of the economic importance and critical role this insect plays in agricultural societies across the globe. However, the association of humans with A. mellifera predates post-industrial-revolution agriculture, as evidenced by the widespread presence of ancient Egyptian bee iconography dating to the Old Kingdom (approximately 2400 BC) [1]. There are also indications of Stone Age people harvesting bee products; for example, honey hunting is interpreted from rock art [2] in a prehistoric Holocene context, and a beeswax find in a pre-agriculturalist site [3]. However, when and where the regular association of A. mellifera with agriculturalists emerged is unknown [4]. One of the major products of A. mellifera is beeswax, which is composed of a complex suite of lipids including n-alkanes, n-alkanoic acids and fatty acyl wax esters. Its composition is highly constant, as it is determined genetically through the insect’s biochemistry. Thus, the chemical ‘fingerprint’ of beeswax provides a reliable basis for detecting this commodity in organic residues preserved at archaeological sites, which we now use to trace the human exploitation of A. mellifera temporally and spatially. Here we present secure identifications of beeswax in lipid residues preserved in pottery vessels of Neolithic Old World farmers. The geographical range of bee product exploitation is traced in Neolithic Europe, the Near East and North Africa, providing the palaeoecological range of honeybees during prehistory. Temporally, we demonstrate that bee products were exploited continuously, and probably extensively in some regions, at least from the seventh millennium cal BC, likely fulfilling a variety of technological and cultural functions. The close association of A. mellifera with Neolithic farming communities dates to the early onset of agriculture and may provide evidence for the beginnings of a domestication process.
Abstract:
The RAGE Exploitation Plan is a living document, to be updated along the project lifecycle, supporting RAGE partners in defining how the results of the RAGE RIA will be used in both commercial and non-commercial settings. The Exploitation Plan covers the entire process, from the definition of the business case for the RAGE Ecosystem to the creation of the sustainability conditions for its real-world operation beyond the H2020 project co-funding period. The Exploitation Plan will be published in three incremental versions, due at months 18, 36 and 42 of the project lifetime. This early-stage version 1 of 3 is mainly devoted to: i. setting up the structure and the initial building blocks to be populated and completed in future editions of the Exploitation Plan; and ii. providing additional guidance for market intelligence gathering, business model definition and validation, outreach and industry engagement, and ultimately providing insights for the development, validation and evaluation of RAGE results across the execution of the project's workplan. These tasks will in turn render suitable inputs to enhance the two future editions of the Exploitation Plan.
Abstract:
Energy transition is humankind's response to the concerning effects of fossil fuel depletion, climate change and energy insecurity, and calls for a deep penetration of renewable energy sources (RESs) into power systems and industrial processes. Despite their high potential, low impacts and long-term availability, RESs present some limits that need to be overcome, such as strong variability and difficult predictability, which result in poor reliability and difficult applicability in steady-state processes. Some technological solutions relate to energy storage systems, equipment electrification and the deployment of hybrid systems, thus accomplishing distributed generation even in remote sites such as offshore locations. However, none of these actions can disregard sustainability, which represents a founding principle for any project, bringing together economics, reliability and environmental protection. To embed sustainability in RES-based innovative projects, existing knowledge and tools are often not tailored to, or miss, the novel objectives. This research proposes three methodological approaches that bridge these gaps. The first contribution adapts literature-based indicators of inherent safety and energy efficiency to capture the specificities of novel process plants and hybrid systems. Minor case studies dealing with novel P2X processes exemplify the application of these novel indicators. The second method guides the conceptual design of hybrid systems for the valorisation of a RES at a given site, by considering the sustainability performance of alternative design options. Its application is demonstrated through the comparison of two offshore sites where wave energy can be valorised. Finally, “OHRES”, a comprehensive tool for the sustainable optimisation of hybrid renewable energy systems, is proposed. “OHRES” hinges on the exploitation of multiple RESs, converting ex-post sustainability indicators into discrimination markers that screen a large number of possible system configurations according to the location's features. Five case studies demonstrate the versatility of “OHRES” in the sustainable valorisation of multiple RESs.
Abstract:
The notion of commodification is a fascinating one. It entails many facets, ranging from subjective debates on the desirability of commodification to in-depth economic analyses of objects of value and their corresponding markets. Commodity theory is therefore not defined by a single debate, but spans a plethora of different discussions. This thesis maps and situates those theories and debates and selects one specific strand to investigate further. It argues that commodity theory in its optima forma deals with the investigation into what sets commodities apart from non-commodities, and proceeds to examine the many answers given to this question by scholars from the mid-1800s to the late 2000s. Ultimately, commodification is defined as a process in which an object becomes an element of the total wealth of societies in which the capitalist mode of production prevails. In doing so, objects must meet observables, or indicia, of commodification provided by commodity theories. Problems arise when objects are clearly part of the total wealth of societies without meeting established commodity indicia. In such cases, objects are part of the total wealth of a society without counting as a commodity. This thesis examines this phenomenon in relation to the novel commodities of audiences and data. It explains how these non-commodities (according to classical theories) are still essential elements of industry. The thesis then takes a deep dive into commodity theory using John Searle's theory of the construction of social reality.