923 results for fixed offshore platform


Relevance:

20.00%

Publisher:

Abstract:

In the last decade, mobile wireless communications have witnessed explosive growth in user penetration rates and widespread deployment around the globe. This tendency is expected to continue with the convergence of fixed wired Internet networks with mobile ones and with the evolution towards a full IP architecture paradigm. Mobile wireless communications will therefore be of paramount importance to the development of the information society of the near future. A research topic of particular relevance in telecommunications nowadays is the design and implementation of fourth-generation (4G) mobile communication systems. 4G networks will be characterized by the support of multiple radio access technologies over a core network fully compliant with the Internet Protocol (the all-IP paradigm). Such networks will sustain the stringent quality of service (QoS) requirements and the high data rates expected from the multimedia applications of the near future.

The approach followed in the design and implementation of current-generation mobile wireless networks (2G and 3G) has been the stratification of the architecture into a communication protocol model composed of a set of layers, each encompassing a set of functionalities. In such a layered protocol model, communication is only allowed between adjacent layers and through specific service interface points. This modular concept eases the implementation of new functionalities, as the behaviour of each layer in the protocol stack is not affected by the others. However, the fact that lower layers in the protocol stack do not use information available at upper layers, and vice versa, degrades the achievable performance. This is particularly relevant when multiple-antenna systems, in a MIMO (Multiple Input Multiple Output) configuration, are implemented. MIMO schemes introduce another degree of freedom for radio resource allocation: the space domain. Contrary to the time and frequency domains, radio resources mapped into the spatial domain cannot be assumed to be completely orthogonal, due to the interference among users transmitting in the same frequency sub-channel and/or time slot but on different spatial beams. Therefore, the availability of information about the state of radio resources, passed from lower to upper layers, is of fundamental importance in achieving the QoS levels expected by those multimedia applications.

To match application requirements with the constraints of the mobile radio channel, researchers have in the last few years proposed a new paradigm for the layered communication architecture: the cross-layer design framework. In general terms, the cross-layer design paradigm refers to a protocol design in which the dependence between protocol layers is actively exploited, breaking the strict rules that restrict communication to adjacent layers in the original reference model and allowing direct interaction among different layers of the stack. Efficient management of the available radio resources demands efficient, low-complexity packet schedulers that prioritize users' transmissions according to inputs provided by lower as well as upper layers of the protocol stack, fully in line with the cross-layer design paradigm.

Specifically, efficiently designed packet schedulers for 4G networks should maximize the available capacity while accounting for the limitations imposed by the mobile radio channel and complying with the QoS requirements of the application layer. The IEEE 802.16e standard, also known as Mobile WiMAX, appears to meet the specifications of 4G mobile networks. Its scalable architecture, low-cost implementation and high data throughput enable efficient data multiplexing and low data latency, attributes essential for broadband data services. In addition, the connection-oriented approach of its medium access control layer is fully compliant with the QoS demands of such applications. Mobile WiMAX is therefore a promising candidate for 4G mobile wireless networks. This thesis proposes the investigation, design and implementation of packet scheduling algorithms for the efficient management of the available radio resources in the time, frequency and spatial domains of Mobile WiMAX networks. The proposed algorithms combine input metrics from the physical layer with QoS requirements from upper layers, following the cross-layer design paradigm. The proposed schedulers are evaluated by means of system-level simulations, conducted on a system-level simulation platform implementing the physical and medium access control layers of the IEEE 802.16e standard.
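
The thesis's own scheduling algorithms are described above only at a high level. As an illustration of the cross-layer idea they build on, the sketch below ranks users with the classic M-LWDF metric from the scheduling literature (not the thesis's algorithm; all names and numbers are illustrative), which multiplies a physical-layer term (instantaneous over average rate) by upper-layer QoS terms (head-of-line delay against a delay budget):

    import math

    def mlwdf_priority(inst_rate, avg_rate, hol_delay, max_delay, drop_prob):
        """M-LWDF cross-layer priority: a PHY-layer rate term weighted
        by upper-layer QoS terms (delay budget and drop probability)."""
        alpha = -math.log(drop_prob) / max_delay  # QoS urgency factor
        return alpha * hol_delay * inst_rate / avg_rate

    def schedule(users):
        """Pick the user with the highest cross-layer priority for the
        next resource unit (time/frequency slot, or spatial beam)."""
        return max(users, key=lambda u: mlwdf_priority(
            u["inst_rate"], u["avg_rate"], u["hol_delay"],
            u["max_delay"], u["drop_prob"]))

    # Two users competing for one OFDMA slot: user 2 has the worse
    # channel but the more urgent head-of-line delay, and wins.
    users = [
        {"id": 1, "inst_rate": 2.0e6, "avg_rate": 1.0e6,
         "hol_delay": 0.020, "max_delay": 0.100, "drop_prob": 0.05},
        {"id": 2, "inst_rate": 1.0e6, "avg_rate": 1.5e6,
         "hol_delay": 0.080, "max_delay": 0.100, "drop_prob": 0.05},
    ]
    print(schedule(users)["id"])  # prints 2

In a MIMO-enabled OFDMA system such as Mobile WiMAX, a metric of this kind would be evaluated per user, per sub-channel and per spatial beam, with the spatial interference discussed above folded into the instantaneous rate estimate.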

Relevance:

20.00%

Publisher:

Abstract:

Sharing sensor data between multiple devices and users can be challenging for naive users, as it requires programming knowledge and the use of different communication channels and/or development tools, leading to non-uniform solutions. This thesis proposes a system that allows users to access sensors, share sensor data and manage sensors. With this system we intend to manage devices, share sensor data, compare sensor data, and set policies that act based on rules. The thesis presents the design and implementation of the system, as well as three case studies of its use.
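
The abstract does not specify the rule format; below is a minimal sketch of what a threshold-based sharing policy could look like (the Policy structure and all names are hypothetical, not the system's actual API):

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Policy:
        """A rule that fires an action when a matching sensor reading
        satisfies its condition."""
        sensor_id: str
        condition: Callable[[float], bool]
        action: Callable[[float], None]

        def apply(self, sensor_id: str, value: float) -> None:
            if sensor_id == self.sensor_id and self.condition(value):
                self.action(value)

    # Example: share a temperature reading only when it exceeds 30 °C.
    policies = [
        Policy("temp-01",
               condition=lambda v: v > 30.0,
               action=lambda v: print(f"sharing temp-01 reading: {v} °C")),
    ]
    for p in policies:
        p.apply("temp-01", 31.5)  # triggers the share action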

Relevance:

20.00%

Publisher:

Abstract:

Spontaneous volunteers always emerge in emergency scenarios and are vital to a successful community response, yet some uncertainty remains around their role and its acceptance by official entities. In our research we have identified that most spontaneous volunteers have little or no support from official entities, and hence end up facing critical problems such as situational awareness, safety instructions and guidance, motivation and group organization. We argue that official entities still play a crucial role and should change some of their behaviours regarding spontaneous volunteerism. In this thesis we aim to design a software architecture and a framework to implement a solution supporting spontaneous volunteerism in emergency scenarios, along with a set of guidelines for the design of open information management systems. Through collaboration with both citizens and emergency professionals we have attained several important contributions, such as the clear identification of the roles taken by both spontaneous volunteers and professionals, the importance of volunteerism in the overall community response, and the role that open collaborative information management systems play in community volunteering efforts. These conclusions directly supported the design guidelines of our proposed software solution. Regarding methodology, we first review the literature on technological support for emergencies and how spontaneous volunteers actually challenge these systems. Next, we performed field research in which we observed that emerging spontaneous volunteer efforts impose new requirements on the design of such systems, which led to a cluster of design guidelines that supported our software solution proposal to address the volunteers' requirements. Finally, we architected and developed an online open information management tool, which was evaluated via usability engineering methods, usability user tests and heuristic evaluations.

Relevance:

20.00%

Publisher:

Abstract:

With the current proliferation of sensor-equipped mobile devices such as smartphones and tablets, location-aware services are expanding beyond the mere efficiency- and work-related needs of users, evolving to incorporate fun, culture and users' social lives. Today, people on the move have ever more connectivity and expect to be able to communicate with their usual, familiar social networks: not only with their peers, colleagues, friends and family, but also with unknown people who may share their interests and curiosities or simply use the same social network. Through social networks, location-aware blogging and cultural mobile applications, relevant information is now available at specific geographical locations and open to feedback and conversations among friends as well as strangers. Indeed, smartphone technologies now allow users to post and retrieve content while on the move, often tied to specific physical landmarks or locations, engaging in conversations with strangers as much as with their own social network. Using such technologies and applications on the move can often lead to serendipitous discoveries and interactions.

In this thesis we engage in a twofold investigation: how can we foster and support serendipitous discoveries, and what are the best interfaces for doing so? Reading and writing content while on the move is a cognitively intensive task: while a map serves to orient the user, it also absorbs most of the user's concentration. To address this kind of cognitive overload, with Breadcrumbs we propose a 360-degree interface that enables users to find content around them by scanning the surrounding space with the mobile device. Using the loose metaphor of a periscope and harnessing the smartphone's sensors, we designed an interactive interface capable of detecting content around the user and displaying it as two-dimensional bubbles whose diameter depends on their distance from the user. Users navigate the space in relation to the content they are curious about, rather than in relation to a traditional geographical map. Through this model we aim to alleviate the cognitive overload generated by continuously reconciling a two-dimensional map with the real three-dimensional space surrounding the user, and also to use the content itself as a navigational filter. Furthermore, this alternative means of navigating space may produce serendipitous discoveries of places users were not aware of or did not intend to reach. We conclude the thesis with an evaluation of the Breadcrumbs application, comparing the 360-degree interface with a traditional two-dimensional map displayed on the device screen. The results of the evaluation are compiled into findings and insights for future use in designing and developing context-aware mobile applications.
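
The periscope interface is described above only conceptually. A minimal geometric sketch of the mechanics it implies, with hypothetical names and parameters (the real application's rendering and sensor fusion are not described in the text):

    import math

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial bearing from the user (lat1, lon1) to a content point
        (lat2, lon2), in degrees clockwise from north."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = (math.cos(phi1) * math.sin(phi2)
             - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
        return math.degrees(math.atan2(y, x)) % 360.0

    def in_view(content_bearing, device_heading, fov_deg=60.0):
        """True if the content lies inside the 'periscope' field of view
        as the user sweeps the device around."""
        diff = (content_bearing - device_heading + 180.0) % 360.0 - 180.0
        return abs(diff) <= fov_deg / 2.0

    def bubble_radius_px(distance_m, r_max=60.0, r_min=8.0, d_max=500.0):
        """Nearer content gets a bigger bubble; clamped beyond d_max metres."""
        frac = min(distance_m, d_max) / d_max
        return r_max - (r_max - r_min) * frac

With the compass heading from the device's magnetometer, content whose bearing falls inside the field of view is drawn as a bubble sized by bubble_radius_px, so scanning the surrounding space reveals nearby content without consulting a map.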

Relevance:

20.00%

Publisher:

Abstract:

Offshore wind power emits low amounts of gases, is renewable and performs better than onshore wind due to, among other factors, its greater stability and higher wind power density and its lower visual and noise impact. Brazil has a high generation capacity but has not yet developed any offshore projects; high costs are a strong impediment. This study is an effort towards pricing offshore resources through the Levelized Cost of Energy (LCOE), which represents the minimum return needed to cover the costs of developing, producing and maintaining a wind project. LCOE was first calculated for all Brazilian onshore wind farms listed at Bloomberg New Energy Finance®, totalling 71 farms. Hypothetical offshore wind farms were then created from the onshore ones by tripling the cost of generation, which is consistent with the literature, and offshore energy was estimated for two locations off the Brazilian coast using satellite data from the National Oceanic and Atmospheric Administration. The results demonstrate that offshore resources have the potential to significantly reduce the energy price due to the better performance of wind at sea.
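
The abstract does not reproduce the LCOE formula; the standard definition it refers to divides discounted lifetime costs by discounted lifetime energy production. A sketch with purely illustrative numbers (not the study's data):

    def lcoe(capex, annual_opex, annual_energy_mwh, lifetime_yr, rate):
        """Levelized Cost of Energy: discounted lifetime costs divided
        by discounted lifetime energy production (currency/MWh)."""
        costs = capex + sum(annual_opex / (1 + rate) ** t
                            for t in range(1, lifetime_yr + 1))
        energy = sum(annual_energy_mwh / (1 + rate) ** t
                     for t in range(1, lifetime_yr + 1))
        return costs / energy

    # Illustrative: a 5 MW turbine at a 40% capacity factor produces
    # about 5 * 8760 * 0.4 = 17,520 MWh per year.
    print(round(lcoe(capex=12e6, annual_opex=4e5,
                     annual_energy_mwh=17_520, lifetime_yr=20, rate=0.08), 2))

Tripling the onshore cost of generation, as the study does, roughly triples the numerator, while the stronger offshore wind resource raises the energy term in the denominator; this is how better performance at sea can still lower the resulting price.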

Relevance:

20.00%

Publisher:

Abstract:

Energy consumption on the planet is currently based on fossil fuels, which are responsible for adverse effects on the environment. Renewables propose solutions to this scenario but must face issues related to power supply capacity. Offshore wind energy is emerging as a promising alternative: wind speed and stability are greater over the oceans, but their variability may cause fluctuations in electric power generation. To reduce this, a combination of geographically distributed wind farms has been proposed. The greater the distance between them, the lower the correlation between their wind speeds, increasing the likelihood that together they achieve a more stable power system with smaller fluctuations in generation. The efficient use of a wind park's production capacity, however, depends on its distribution across marine environments. The objective of this research was to analyze the optimal allocation of offshore wind farms on the east coast of the U.S. using Modern Portfolio Theory. Modern Portfolio Theory was applied so that the construction of offshore wind energy portfolios accounts for the intermittency of wind, through calculations of the return and risk of wind farm production. The research was conducted with 25,934 observations of energy produced by 11 hypothetical offshore wind farms, each based on one simulated ocean turbine with a capacity of 5 MW. The data have hourly resolution and cover the period from January 1, 1998 to December 31, 2002. Using the MATLAB® software, six minimum-variance portfolios were calculated, each for a distinct time period. Given that wind variability is uneven over time, four rebalancing strategies were set up to evaluate the performance of the related portfolios, which made it possible to identify the most beneficial one for the stability of offshore wind energy production. The results showed that wind energy production for 1998, 1999, 2000 and 2001 should be weighted by the portfolio weights calculated for the same periods, respectively; energy data for 2002 should use the weights derived from the portfolio calculated for the previous period; and production over the whole 1998-2002 period should be weighted equally, by 1/11. It follows, therefore, that the portfolios found failed to show reduced levels of variability when compared to the individual production of the hypothetical offshore wind farms.
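
The abstract names the minimum-variance portfolio without giving its form. In the unconstrained case it has the closed-form solution w = C^{-1} 1 / (1^T C^{-1} 1), with C the covariance matrix of the farms' production; a sketch with synthetic data (the study's data and any additional constraints are not reproduced here):

    import numpy as np

    def min_variance_weights(production):
        """Minimum-variance weights for a T x N matrix of production
        observations: w = C^{-1} 1 / (1^T C^{-1} 1)."""
        cov = np.cov(production, rowvar=False)
        ones = np.ones(cov.shape[0])
        w = np.linalg.solve(cov, ones)
        return w / w.sum()

    # Illustrative: 3 farms, 1000 hourly production observations.
    rng = np.random.default_rng(0)
    prod = rng.normal(1.0, [0.10, 0.12, 0.15], size=(1000, 3))
    w = min_variance_weights(prod)
    print(w.round(3), w.sum())  # weights sum to 1

Note that the unconstrained solution can assign negative weights; a portfolio of physical wind farms would normally add a non-negativity constraint and solve the resulting quadratic program instead.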

Relevance:

20.00%

Publisher:

Abstract:

New oil reservoir discoveries in onshore and ultra-deep-water offshore fields, together with complex well trajectories, require optimized procedures to reduce operational stops during well drilling, especially given the high cost of platforms and equipment and the risks inherent to the operation. Among the most important aspects stands out the design of drilling fluids and their behavior in the different situations that may occur during the process. By means of sedimentation experiments, a correlation was validated to determine the settling velocity of particles in fluids whose viscosity varies over time, applying a correction for the effective viscosity, which is a function of shear rate and time. The evolution of viscosity over time was obtained by rheological tests at a fixed shear rate, small enough not to interfere with the fluid's gelling process. From the equations for particle settling velocity and fluid viscosity over time, an iterative procedure was proposed to determine particle displacement over time. These equations were applied in a case study simulating the sedimentation of the cuttings generated during drilling stops, especially during connections and tripping, allowing drilling fluids to be designed to keep the cuttings in suspension, avoiding risks such as stuck pipe and, in more drastic conditions, the loss of the well.
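
The validated correlation itself is not reproduced in the abstract. The sketch below shows the shape of the iterative procedure, assuming Stokes settling and a hypothetical power-law gelling model for the effective viscosity; the thesis's correlation and measured viscosity evolution would replace mu_eff and stokes_velocity here:

    G = 9.81  # gravitational acceleration, m/s^2

    def mu_eff(t, mu0=0.04, k=0.5, m=0.6):
        """Hypothetical effective viscosity growth during gelling (Pa.s)."""
        return mu0 * (1.0 + k * t ** m)

    def stokes_velocity(d, rho_p, rho_f, mu):
        """Stokes settling velocity of a sphere of diameter d (m/s)."""
        return (rho_p - rho_f) * G * d ** 2 / (18.0 * mu)

    def displacement(t_end, dt=1.0, d=3e-3, rho_p=2600.0, rho_f=1200.0):
        """Integrate the particle's fall step by step while the fluid
        viscosity builds up over time."""
        x, t = 0.0, 0.0
        while t < t_end:
            x += stokes_velocity(d, rho_p, rho_f, mu_eff(t)) * dt
            t += dt
        return x

    # How far a 3 mm cutting settles during a 10-minute connection stop:
    print(f"{displacement(600.0):.2f} m")

A fluid that gels quickly enough makes this displacement negligible over the duration of connections and tripping, which is exactly the suspension criterion the case study evaluates.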

Relevance:

20.00%

Publisher:

Abstract:

The development of oil well drilling requires additional care, mainly when drilling in offshore ultra-deep water, where low overburden pressure gradients cause low fracture gradients and, consequently, hinder drilling by narrowing the operational window. To minimize, in the well planning phase, the difficulties faced when drilling in such scenarios, indirect models are used to estimate the fracture gradient, predicting approximate values for leak-off tests. These models generate geopressure curves that allow detailed analysis of the pressure behavior along the whole well. Most of these models are based on the Terzaghi equation, differing only in how they determine the rock stress coefficient. This work proposes an alternative method for predicting the fracture pressure gradient based on a geometric correlation that relates the pressure gradients proportionally at a given depth and extrapolates the relation over the whole well depth, meaning that these parameters vary in a fixed proportion. The model is based on analytical proportion segments corresponding to the differential pressure related to the rock stress. The study shows that the proposed analytical proportion segments reach fracture gradient values in good agreement with the leak-off tests available for the field area. The results were compared with twelve different indirect models for fracture pressure gradient prediction based on the compaction effect; for this, a program was developed in the MATLAB language. The comparison was also made varying the water depth from zero (onshore wellbores) to 1500 meters, and the leak-off tests were used to compare the different methods, including the one proposed in this work. The proposed method gives good results in the error analysis compared to the other methods and, due to its simplicity, justifies its possible application.
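
The abstract cites the Terzaghi-based family of indirect models without reproducing it. In its common form, with K the rock stress coefficient (the quantity the twelve compared models determine differently), the fracture pressure P_f follows from the overburden stress and the pore pressure; dividing by the true vertical depth D gives the gradient form used for the geopressure curves:

    % Terzaghi-based indirect fracture model (general form)
    % P_f: fracture pressure, P_p: pore pressure,
    % \sigma_{ov}: overburden stress, K: rock stress coefficient
    P_f = K\,(\sigma_{ov} - P_p) + P_p
    % gradient form (divide through by true vertical depth D):
    G_f = K\,(G_{ov} - G_p) + G_p

Increasing the water depth lowers G_{ov} at shallow penetrations, which is why the operational window between G_p and G_f narrows in ultra-deep water, as noted above.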

Relevance:

20.00%

Publisher:

Abstract:

Background: In bovines, more efficient management practices are important for maximizing profitability. In order to increase pregnancy rates in artificial insemination (AI) programs, several hormonal protocols have been developed to synchronize the follicular wave and the moment of ovulation in beef and dairy cattle. In dairy cattle, detection of estrus can be difficult due to a number of factors, including the incidence of silent estrus. Hormonal treatments designed to control both luteal and follicular function have permitted efficient synchronization of the time of ovulation. Thus, AI can be performed in a large number of animals on a fixed schedule, without the need for estrus detection. Using these management techniques, fixed-time artificial insemination (TAI) can overcome the problem of accurate estrus detection and help reduce the incidence of repeat breeding. In addition, TAI in cattle operations makes it possible to facilitate management practices and commercialization, and to reduce the time and semen wasted on animals inseminated at incorrect times. The investigation of practical and efficient TAI protocols is important for reducing the labor and animal handling involved in TAI in dairy cattle, as well as for increasing the profitability of the cattle management system. This study was carried out to investigate the effectiveness of TAI in dairy heifers treated with a practical progesterone-based protocol.

Materials, Methods & Results: This experiment was conducted at the university farm located in southwestern Brazil during May 2009. Thirty-nine cycling crossbred dairy heifers were employed in this study. All animals received a single intramuscular injection of estradiol benzoate and an intravaginal progesterone-releasing device at a random stage of the estrous cycle (day 0). On day 7 the animals were treated with a PGF2α analogue, and on day 9 the device was removed. Forty-eight hours after device removal (day 11), a synthetic GnRH analogue was administered and the animals were fixed-time artificially inseminated at the time of the GnRH injection. The inseminations were performed using four different batches from the same Holstein bull. Among the heifers that were synchronized (87.2%), 30.8% ovulated within 24 h of TAI and 56.4% ovulated between 24 and 32 h after TAI. The conception rate was 61.5%. No effect of ovulation time on conception rate was detected. The conception rate of heifers that ovulated within 24 h of TAI was 58.3%, and of heifers that ovulated between 24 and 32 h after TAI, 77.3%. The mean ovulatory follicle diameter was 14.3 mm in heifers that ovulated within 24 h and 11.9 mm in heifers that ovulated between 24 and 32 h.

Discussion: Taken together, the findings of the present study, along with those of others, emphasize the concept that the development of practical methods for TAI offers significant advantages to dairy producers if conception rates are close to or greater than those obtained after breeding at detected estrus. Thus, the results of the present study reinforce the possibility of making dairy cattle production more cost-effective using TAI. In conclusion, with the progesterone-based TAI protocol of the present experiment, all synchronized animals ovulated up to 32 h after GnRH administration and TAI, and no effect of ovulation time on conception rate was detected. The exogenous control of luteal and follicular development facilitated reproductive management and animal handling. Also, inseminating the heifers at the moment of GnRH injection in a progesterone-based TAI protocol is a practical strategy and provided satisfactory results regarding ovulation and conception rates in dairy heifers.

Relevance:

20.00%

Publisher:

Abstract:

Fifteen cases of viral meningoencephalitis in Colombian cattle were tested by nested PCR for the detection of bovine herpesvirus 5 (BoHV-5). All fatal cases had shown severe neurological signs and had occurred following natural outbreaks of the disease. The neurological infection was histologically characterized by mild to moderate inflammatory changes in the brain and cerebellum, including meningitis, mononuclear perivascular cuffing, gliosis, haemorrhage, and the presence of Gitter cells (macrophages) accompanying large areas of malacia. No intranuclear inclusion bodies were seen in any of the cases. The molecular analyses identified five BoHV-5-positive cases, thus confirming the presence of the virus in Colombia.

Relevance:

20.00%

Publisher:

Abstract:

This work analyzes the behavior of the gas flow of plunger lift wells producing to well testing separators on offshore production platforms, aiming at a technical procedure to estimate the gas flow rate during the slug production period. The motivation for this work arose from wells equipped with the plunger lift method by PETROBRAS in the Ubarana offshore field, off the coast of Rio Grande do Norte State, where produced fluids are measured in well testing separators on the platform. The artificial lift method called plunger lift is used when the available reservoir energy is not high enough to overcome all the head losses needed to lift the oil from the bottom of the well to the surface continuously. The method consists, basically, of a free piston acting as a mechanical interface between the formation gas and the produced liquids, greatly increasing the well's lifting efficiency. A pneumatic control valve mounted on the flow line controls the cycles: when it opens, the plunger travels from the bottom of the well to the surface, lifting all the oil and gas above it until they reach the well test separator, where the fluids are measured. The well test separator is used to measure all the volumes produced by a well during a certain period of time called a production test. In most cases, separators are designed to measure stabilized, that is, reasonably constant flow, relying on electronic level and pressure controllers (PLCs) and on the assumption of a steady pressure inside the separator. In plunger lift wells, the liquid and gas flows at the surface are cyclical and unstable, causing slugs inside the separator, mainly in the gas phase, which introduce significant errors in the measurement system (e.g., overrange errors). The gas flow analysis proposed in this work is based on two mathematical models used together: i) a plunger lift well model proposed by Baruzzi [1], with later modifications by Bolonhini [2] to build a plunger lift simulator; ii) a two-phase (gas + liquid) separator model derived from the three-phase (gas + oil + water) separator model proposed by Nunes [3]. Based on these models and on field data collected from the well test separator of the PUB-02 platform (Ubarana field), it was possible to demonstrate that the output gas flow of the separator can be estimated, with reasonable precision, from the control signal of the pressure control valve (PCV). Several models from the MATLAB® System Identification Toolbox were analyzed to evaluate which one fit the field data best. For model validation, the AIC criterion was used, as well as a variant of the cross-validation criterion. The ARX model fit the data best, so a recursive algorithm (RARX) was also evaluated with real-time data. The results were quite promising, indicating the viability of estimating the output gas flow rate of a plunger lift well producing to a well test separator from the control signal sent to the PCV.
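
As an illustration of the model family that performed best, the sketch below fits an ARX model by least squares on simulated data; this stands in for MATLAB's System Identification Toolbox workflow, and the field data and model orders actually used in the work are not reproduced:

    import numpy as np

    def fit_arx(y, u, na=2, nb=2):
        """Least-squares fit of an ARX(na, nb) model
        y[t] = -a1*y[t-1] - ... - a_na*y[t-na]
               + b1*u[t-1] + ... + b_nb*u[t-nb] + e[t]."""
        n = max(na, nb)
        rows, targets = [], []
        for t in range(n, len(y)):
            row = [-y[t - i] for i in range(1, na + 1)]
            row += [u[t - i] for i in range(1, nb + 1)]
            rows.append(row)
            targets.append(y[t])
        theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets),
                                    rcond=None)
        return theta[:na], theta[na:]

    # Illustrative: recover a known system; u stands in for the PCV
    # control signal, y for the separator output gas flow.
    rng = np.random.default_rng(1)
    u = rng.normal(size=2000)
    y = np.zeros_like(u)
    for t in range(2, len(u)):
        y[t] = (0.6 * y[t-1] - 0.2 * y[t-2]
                + 0.5 * u[t-1] + 0.1 * u[t-2] + 0.01 * rng.normal())
    a, b = fit_arx(y, u)
    print(a.round(2), b.round(2))  # ~[-0.6  0.2] and ~[0.5  0.1]

A recursive variant (RARX) updates the parameter vector sample by sample instead of solving one batch least-squares problem, which is what makes the real-time estimation mentioned above feasible.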

Relevance:

20.00%

Publisher:

Abstract:

Simulations based on cognitively rich agents can become a very intensive computing task, especially when the simulated environment represents a complex system, and the situation becomes worse when time constraints are present. Such simulations would benefit from a mechanism that improves the way agents perceive and react to changes in these environments; in other words, an approach that improves the efficiency (performance and accuracy) of the decision process of autonomous agents in a simulation would be useful. In complex environments full of variables, not all the information available to an agent is necessarily needed for its decision-making process; it depends on the task being performed. The agent would then need to filter incoming perceptions the same way we do with our focus of attention: using a focus of attention, only the information that really matters to the agent's running context is perceived (cognitively processed), which can improve the decision-making process.

The architecture proposed herein structures cognitive agents in two parts: 1) a main part containing the reasoning/planning process, knowledge and affective state of the agent, and 2) a set of behaviors triggered by the planner in order to achieve the agent's goals. Each of these behaviors has a focus of attention that is dynamically adjustable at runtime, according to variations in the agent's affective state. The focus of each behavior is divided into a qualitative focus, responsible for the quality of the perceived data, and a quantitative focus, responsible for the quantity of the perceived data. The behavior is thus able to filter the information sent by the agent's sensors and build a list of perceived elements containing only the information the agent needs, according to the context of the behavior currently running. Alongside this human-inspired attention focus, the agent is also endowed with an affective state based on theories of human emotion, mood and personality. This model serves as the basis for the mechanism that continuously adjusts the agent's attention focus, both qualitative and quantitative. With this mechanism, the agent can adjust its focus of attention during the execution of a behavior, becoming more efficient in the face of environmental changes. The proposed architecture can be used very flexibly: the focus of attention can work in a fixed way (neither the qualitative nor the quantitative focus changes), or with different combinations of qualitative and quantitative focus variation. The architecture was built on a platform for BDI agents, but its design allows it to be used with any other type of agent, since the implementation sits only at the agent's perception layer.

To evaluate the contribution proposed in this work, an extensive series of experiments was conducted on an agent-based simulation of a fire-growing scenario. In the simulations, agents using the proposed architecture are compared with similar agents (with the same reasoning model) that process all the information sent by the environment. Intuitively, the omniscient agents would be expected to be more efficient, since they can weigh every possible option before making a decision. However, the experiments showed that attention-focus-based agents can be as efficient as the omniscient ones, with the advantage of solving the same problems in significantly less time. The experiments thus indicate the efficiency of the proposed architecture.
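
The two-part focus is described above only in prose; a minimal sketch of the per-behavior filter it implies (the AttentionFocus structure, the arousal-based adjustment rule and all names are hypothetical):

    from dataclasses import dataclass

    @dataclass
    class Percept:
        kind: str        # e.g. "fire", "smoke", "tree"
        salience: float  # how strongly the sensors rate this percept

    class AttentionFocus:
        def __init__(self, relevant_kinds, capacity):
            self.relevant_kinds = set(relevant_kinds)  # qualitative focus
            self.capacity = capacity                   # quantitative focus

        def adjust(self, arousal):
            """Higher affective arousal narrows the quantitative focus
            to fewer, stronger percepts."""
            self.capacity = max(1, int(self.capacity * (1.0 - 0.5 * arousal)))

        def filter(self, percepts):
            """Keep only percept kinds relevant to the running behavior,
            strongest first, up to the current capacity."""
            relevant = [p for p in percepts if p.kind in self.relevant_kinds]
            relevant.sort(key=lambda p: p.salience, reverse=True)
            return relevant[: self.capacity]

    # A firefighting behavior that attends only to fire and smoke.
    focus = AttentionFocus({"fire", "smoke"}, capacity=4)
    percepts = [Percept("fire", 0.9), Percept("tree", 0.8),
                Percept("smoke", 0.6), Percept("fire", 0.4)]
    focus.adjust(arousal=1.0)  # maximum arousal halves capacity to 2
    print([p.kind for p in focus.filter(percepts)])  # ['fire', 'smoke']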

Relevance:

20.00%

Publisher:

Abstract:

This study presents the implementation and embedding of an Artificial Neural Network (ANN) in hardware, i.e., in a programmable device such as a field-programmable gate array (FPGA). The work explores different implementations, described in VHDL, of multilayer perceptron ANNs. Despite the parallelism inherent to ANNs, software implementations are at a disadvantage due to the sequential nature of von Neumann architectures. As an alternative, hardware implementation makes it possible to exploit all the parallelism implicit in the model. FPGAs are increasingly used as a platform to implement neural networks in hardware, exploiting their high processing power, low cost, ease of programming and circuit reconfigurability, which allows the network to adapt to different applications. In this context, the aim is to develop neural network arrays in hardware with a flexible architecture in which neurons can be added or removed and, above all, the network topology can be modified, enabling a modular fixed-point-arithmetic network on an FPGA. Five VHDL descriptions were synthesized: two for neurons with one or two inputs, and three for different ANN architectures. The architecture descriptions are highly modular, easily allowing the number of neurons to be increased or decreased. As a result, complete neural networks were implemented on an FPGA, in fixed-point arithmetic, with high-capacity parallel processing.
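
As a bit-accurate companion to the VHDL descriptions discussed above, the sketch below reproduces what a single fixed-point neuron's multiply-accumulate chain computes; the Q4.12 word length, the saturation policy and the ReLU-style activation are assumptions for illustration, not the thesis's actual choices:

    FRAC_BITS = 12                    # Q4.12: 4 integer bits, 12 fractional
    SCALE = 1 << FRAC_BITS
    Q_MIN, Q_MAX = -(1 << 15), (1 << 15) - 1  # 16-bit two's complement

    def to_fixed(x: float) -> int:
        """Quantize a real number to Q4.12 with saturation."""
        return max(Q_MIN, min(Q_MAX, int(round(x * SCALE))))

    def neuron(inputs, weights, bias):
        """Integer multiply-accumulate, then rescale and saturate,
        mirroring a DSP-slice MAC chain on the FPGA."""
        acc = to_fixed(bias) << FRAC_BITS      # hold bias at product scale
        for x, w in zip(inputs, weights):
            acc += to_fixed(x) * to_fixed(w)
        y = max(Q_MIN, min(Q_MAX, acc >> FRAC_BITS))  # back to Q4.12
        return max(0, y) / SCALE               # ReLU-style activation

    print(neuron([0.5, -0.25], [1.5, 2.0], bias=0.1))  # ~0.35

Because every multiply-accumulate unit is independent, one such neuron can be instantiated per network node and run in parallel, which is precisely the advantage over sequential software execution that the work exploits.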